Sunday, June 14, 2020

Weekend reads
Saturday 13 June 2020

Philosophy of mind | Editors’ pick
I attend, therefore I am
You are only as strong as your powers of attention, and other uncomfortable truths about the self
by Carolyn Dicey Jennings


An art student paints a picture for a school exhibition. 1961. Photo by Eve Arnold/Magnum
Carolyn Dicey Jennings
is assistant professor of philosophy and cognitive science at the University of California, Merced. Her research has been published in Synthese, Journal of the American Philosophical Association, Consciousness & Cognition, and Journal of Consciousness Studies. She lives in Merced.






Edited by Sam Dresser


You have thoughts, feelings and desires. You remember your past and imagine your future. Sometimes you make a special effort, other times you are content to simply relax. All of these things are true about you. But do you exist? Is your sense of self an illusion, or is there something in the world that we can point to and say: ‘Ah, yes – that is you’? If you are familiar with the contemporary science of mind, you will know that the concept of a substantive self, separate from the mere experience of self, is unpopular. But this stance is unwarranted. Research on attention points to a self beyond experience, with its own powers and properties.
So what is attention? Attention is what you use to drown out distracting sights and sounds, to focus on whatever it is you need to focus on. You are using attention to read this, right now. It is something that you can control and maintain but it is also strongly influenced by the world around you, which encourages you to focus on new and different stimuli. Sometimes being encouraged to change focus can be good – it is good that you look up from your cellphone when a bike comes barrelling down the sidewalk, for example. But this encouragement can also keep you from completing tasks, as when you get caught in a spiral of mindless clickbait. You might think of your powers of attention as what you use to control the focus of your attention, away from distractions and toward your favoured point of focus.
This same power of attention – what you use in everyday life to stay on task – is what helps you in moments of conflict more generally – moments when you are caught between two (or more) options, both of which appeal to you, and you are torn on which option to choose. The philosopher Robert Kane has a way of talking about these life-defining moments: they are ‘self-forming actions’. Kane’s idea is that our truest expressions of ourselves come at moments in which our will is divided. At such moments, we could go either of two ways, but we go one way, and in doing so we help set in place some feature of ourselves – the feature that aligns with the chosen path.
Imagine that while job-hunting you receive two offers, only one of which is in your current field. The job in your field would provide security and good conditions, but you have come to find yourself more interested in the new field. The job in the new field would be risky, with less security and more challenging conditions, but you hope that it will lead to better opportunities in the future. What should you do?
For Kane, the effort of choosing between these two halves of yourself – the half that is concerned about security and the half that desires change – creates conflict in the brain that can be resolved only through a combination of quantum indeterminacy and chaotic amplification. While this might seem implausible on its face, Kane’s proposed mechanism has some evidentiary support. The result is a self-forming action in two respects. We are responsible for forming the action, whatever the outcome, by putting our efforts behind each of two opposing outcomes and forcing a resolution. And the outcome helps to shape our future self, in that it favours one of two hitherto conflicting motivations.
Although Kane does not explicitly mention attention, it is clear that attention is an essential part of this picture. When faced with conflicting options, we attend to them in turn. You turn your attention from the security of one job to the excitement of the other. Sometimes attention helps to determine the outcome, as when we focus more on either security or excitement. Other times our attention creates the conditions for indeterminacy, as we effortfully keep both options afloat. Either way, attention plays a crucial role.
Would self-forming actions still occur without all this effort of attention? What if the two options – the two halves of ourselves – simply battled it out on their own? Wouldn’t it be a self-forming action regardless of how the conflict is resolved? Let’s call this the Frostian Concern, after Robert Frost’s poem ‘The Road Not Taken’ (1916). In the poem, Frost is confronted with two paths in the woods that appear ‘really about the same’, and chooses to walk down one, predicting that he will in the future say:
Two roads diverged in a wood, and I –
I took the one less travelled by,
And that has made all the difference.
The Frostian Concern is that the other path is just as likely to have ‘made all the difference’. In that case, Frost’s future self would have remembered some other difference in the paths to justify his choice, and that would have been woven into his life story. In this view, the effort of attention is not required to form the self – our choices could be entirely determined, or entirely random, and a self with an explanatory narrative would still be formed.
This is a good objection. It fits a line of reasoning that is currently popular in cognitive science: not only is attention not required to form the self, there is no real self at all. In so far as the self exists, it is simply part of a story that we tell ourselves and others. As the neuroscientist Anil Seth puts it: ‘I predict (myself), therefore I am.’ We, biological and minded beings, construct the concept of a self because it is the best way of explaining to ourselves and others certain aspects of our behaviour. When you accidentally knock over something, you might say, for example: ‘I didn’t do that, it was an accident.’ In such cases, it is helpful to have the concept of ‘I’ to distinguish the unintended movements of your body from the intended movements of your body. ‘I’ did this, my body did that.
Once you start using this concept, you are not far from constructing a full-fledged self, with preferences and tendencies. But this doesn’t mean that there really is anything substantive that accounts for all of these happenings – it is enough that in each case there is an intention, a goal, and that you are able to identify and communicate which behaviours are connected to that goal, without there being a further source of these goals and behaviours. Perhaps ‘I didn’t do that, it was an accident’ is just shorthand for ‘There wasn’t an intention to do that, it was an accident.’
Following such considerations, the philosopher Daniel Dennett proposed that the self is simply a ‘centre of narrative gravity’. Just as the centre of gravity of a physical object is not a part of that object but a useful concept for understanding the relationship between that object and its environment, so the centre of narrative gravity in us is not a part of our bodies, nor a soul inside us, but a useful concept for making sense of the relationship between our bodies, complete with their own goals and intentions, and our environment. So you, you, are a construct, albeit a useful one. Or so goes Dennett’s thinking on the self.
And it isn’t just Dennett. The idea that there is a substantive self is passé. When cognitive scientists aim to provide an empirical account of the self, it is simply an account of our sense of self – why it is that we think we have a self. What we don’t find is an account of a self with independent powers, responsible for directing attention and resolving conflicts of will.
There are many reasons for this. One is that many scientists think that the evidence counts in favour of our experience in general being epiphenomenal – something that does not influence our brain, but is influenced by it. In this view, when you experience making a tough decision, for instance, that decision was already made by your brain, and your experience is a mere shadow of that decision. So for the very situations in which we might think the self is most active – in resolving difficult decisions – everything is in fact already achieved by the brain.
In support of this view, it is common to cite Benjamin Libet’s brain experiments of the 1980s, or Daniel Wegner’s book The Illusion of Conscious Will (2002). Yet, these findings don’t come close to showing that our experience is epiphenomenal.
Libet’s experiments show that, by monitoring the brain, we can predict a participant’s choice to flex her wrist or finger at a specific time before the participant claims to have made that choice. But Libet and others have noted that the participant is able to change her mind even after that prediction is made, in which case nothing is flexed. So it doesn’t make sense to think that our prediction is based on a final decision made by the brain that’s out of the participant’s control. (Whether choosing a time at which to flex our wrist or finger is sufficiently similar to making a difficult decision is another matter.)
Wegner’s book shows only that participants are subject to illusions of will. Using a device similar to a ouija board, for example, participants sometimes overestimate their influence on the device when it is in fact moved by someone else. But demonstrating the existence of illusions of will is not the same as demonstrating the absence of will. Compare this to the Moon Illusion, in which we judge the Moon to be much larger when it is at the horizon than when it is higher in the night sky – we don’t conclude that the Moon doesn’t exist simply because we overestimate its size in certain cases.
So what gives?
The ultimate source of this trend is an old-fashioned worldview. Most agree that the predictive power of science reveals a Universe that can be captured by laws. When we make an error in prediction, it is because we have not yet discovered the right law. The old-fashioned worldview is that these laws should privilege the microphysical domain, such that all happenings at the macro level – the level at which we experience the world – are ideally described by happenings at the micro level. In this view, even if we cannot provide an account of our conscious experience in terms of electrons, our experience comes down to the movement of electrons.
What’s more, activity at the micro level is ultimately deterministic: the movement of electrons that accounts for our conscious experience right now comes down to the movement of electrons a moment before, and the movement of electrons a moment before comes down to … the movement of electrons at the very beginning of the Universe. There is no accounting for true indeterminism in this view, nor for effects of scale. There is no room for autonomy or free will (or, at least, for one way of thinking about free will), since all microphysical events have already been accounted for by prior microphysical events.
Yet, another worldview is now emerging that emphasises nonlinear dynamics and complex systems. Importantly, this worldview sets aside the assumptions of reductionism and micro-level determinism. In this vein, neuroscientists have begun to argue that the brain’s causal power cannot be reduced to small-scale brain activity. This provides room for a substantive self, with its own powers and properties, distinct from those of individual neurons or mere collections of neurons. (Note that whether it is truly a causal power or some other power, such as what the philosopher Carl Gillett calls ‘machretic determination’, is a complex question that I won’t be answering here.)
So what is a substantive self? It is obvious that to be a substantive self, one must have identifiable traits, separable from others – that is just what it means to be a self. But what are these identifiable traits? A common suggestion is to think of the self as identical to the body, since one’s body is (typically) separable from other bodies. But this won’t work as a complete account of the self because many bodily behaviours don’t belong to the self (eg, accidents and reflexes). In such cases, intention is used to identify the role of a self. So a better account of the self would define it in terms of its intentions – its interests, goals, desires and needs. These are central to a self.
So far, Dennett would likely nod in agreement – we all certainly have interests, goals, desires and needs. The controversial part is this: in my view, the collection of our interests, goals, desires and needs has a status independent of both its microphysical underpinnings and its microphysical past. This idea is derived from observations of human behaviour. Just about everyone has had the experience of speaking to someone who is responding, but ‘not really listening’. We easily distinguish behaviour that is automatic, such as a reflexive verbal response, from behaviour that is controlled. The difference between these forms of behaviour is that controlled behaviour takes account of a broader spectrum of interests. So there is a difference between a mere collection of interests, one of which is dominant at any one time, and a collection of interests that flexibly determines which interest is dominant. In the latter case, there is an entity present that is not present in the former case – the full set of interests.
This understanding of the self can account for the process of attention. As mentioned above, attention is informed both by your current task and by new stimuli that might appear. ‘Top-down attention’ refers to your ability to direct and maintain focus according to your current goals and interests, whereas ‘bottom-up’ attention directs your focus to new and different stimuli. Your top-down attention might help you to focus on this article, while your bottom-up attention might urge you to focus on a conversation nearby. In the cognitive sciences, these are treated as separable, interacting processes. So what accounts for your ability to balance these forces? How are you able to stay on task and resist the pull of new and interesting stimuli, such as the conversation nearby? In my view, the determination of whether and when to stay focused on a current task versus switching focus to a new stimulus is best explained as directed by a substantive self. This is because the substantive self is more than the current task, incorporating the organism’s full set of interests. So the substantive self will be best able to balance the current task with other potential interests.
That’s why, instead of seeing our behaviour as determined by our interests, and our interests as determined by a mix of genes and environment, I see it this way: our behaviour is determined by the self, or our full set of interests working together, which is not determined by the individual interests or the mere sum of those individual interests. In other words, it is not just your love of udon, poetry or tiger lilies that makes you a self – it is the collection of these and other interests, all working together to guide your behaviour.
Specifically, a collection of interests becomes a substantive self at the moment it exerts control over its component interests. The need for such control comes from constraints faced by the collection that are not faced by its components – the competition for resources that are shared across the components. The resolution of this competition is attention. So, in my view, the self comes into being with the first act of attention, or the first time attention favours one interest over another. This will occur when we have multiple interests, two or more of which are in conflict. At the very moment attention resolves such a conflict, the self is born.
This view of the substantive self need not fall prey to the ‘homunculus fallacy’, in which we explain a phenomenon by introducing a homunculus, which then must also be explained by introducing a new homunculus, and so on. Instead, my understanding of a substantive self is as a physically realised emergent phenomenon – it is made up of parts but it has a property that goes beyond the sum of its parts, in that it has some degree of power or control over its parts. This power might be simply to increase the influence of some parts (eg, goals or interests) at the expense of others, in keeping with the needs and capacities of the whole.
Further, this substantive self can exist even if our experience of the self is a construct. We might not be able to directly experience the substantive self, as the philosopher Jesse Prinz argues. In that case, our experience of the self could actually be a model of the self that we have constructed based on what we infer to be its role. The neuroscientist Michael Graziano argues that all of our experiences depend on a model in the brain. In that case, we would expect our experience of the self to depend on such a model, even if the self is more than a mere model. And, like all models, our experience of the self might sometimes get it wrong, leading to the illusions of control detected by Wegner and others. (Another explanation of these illusions is that they derive from errors in judgment, rather than experience. That is, our ability to accurately describe our own experience, rather than our experience itself, might be to blame.)
Importantly, this view of the self accounts for one aspect of self-forming actions left unexplained by merely constructed selves. Recall that self-forming actions are both formed by the self and form the self. The Frostian Concern allows us to see how agents later explain their actions to themselves by constructing a narrative, and how the self might be no more than the centre of that narrative. So this explains how self-forming actions can come to form the ‘self’ without the existence of a substantive self. But this view of the self would not account for Kane’s contention that it is the self that drives the conflict in the first place, through effort – for Kane, it is by effortfully attending to two conflicting options at the same time that the self provides space for indeterminism. It might be an illusion that our attention in such matters is (at least in part) up to us, but we have another, I think better, option: it’s not an illusion, because attention is controlled (at least in part) by a substantive self.
Yet, my view is not committed to Kane’s idea that a self exists only if it takes part in these indeterministic self-forming actions. In my picture, which might be closer to that of the philosopher Timothy O’Connor, the self is created the moment attention is first active, regardless of whether that moment is brought about through deterministic or indeterministic processes. What provides space for the self is not indeterminism. Instead, the self has a status independent of microphysical particles, both past and present, because it is an emergent entity that has powers beyond those of its parts, and so cannot be reduced to those parts. Further, it has a status independent of other macro-level objects because it depends on the grouping of physically bounded microphysical particles, and that grouping is unique to that living organism.
I came to this view – that attention comes about due to the interaction between our interests and the resources shared by those interests as a whole – by thinking about flocking behaviour. That is, our interests, goals, desires and needs interact with one another much like birds interact in a flock. Yet, birds also interact with the shared environment of that flock (eg, gusts of wind), which provides special constraints to the flock as a whole. The interactions of the flock with its environment can create beautiful patterns, known to bird watchers as ‘murmurations’. Similarly, the interactions of our interests and the resources shared by those interests can result in real, observable patterns. These patterns reflect the enhancement of some interests at the cost of others. That is, our interests, as a group, lead to changes in our interests, as members of that group.
How might this work in the brain? One possibility is that it relies on a phenomenon much like the synchronisation of metronomes. If you place several metronomes on a table and start them out of phase, they will eventually synchronise. This is because the metronomes share the table, in which their oscillations cumulate (forces in opposing directions cancel out, while forces in the same direction add up), leading to an overall push in a specific direction at a specific time. For neurons, it might be a shared electromagnetic field, bound by the meninges and skull, rather than a shared table, that allows for synchronisation. In this case, the electromagnetic field would be a resource shared by the neurons, and patterns of synchronisation within that field would reflect the division of this resource based on the whole set of neurons.
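To make the metronome analogy concrete, here is a minimal simulation sketch in the spirit of the Kuramoto model of coupled oscillators. It illustrates the general phenomenon only; it is not the author’s model, and every parameter in it (the number of oscillators, the coupling strength, the frequencies) is an assumption chosen for demonstration. Each oscillator feels only the pooled, mean-field activity of the whole group (the analogue of the shared table, or of the shared field), and the group drifts from scattered phases into synchrony.

```python
# Minimal mean-field (Kuramoto-style) sketch of oscillators synchronising
# through a shared medium -- a stand-in for metronomes on a table or,
# speculatively, neurons in a shared field. Illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
n = 50                                  # number of oscillators ("metronomes")
coupling = 1.5                          # strength of the shared-medium coupling
dt = 0.01
omega = rng.normal(1.0, 0.1, n)         # slightly different natural frequencies
theta = rng.uniform(0, 2 * np.pi, n)    # start out of phase

def synchrony(phases):
    """0 = completely scattered phases, 1 = perfect synchrony."""
    return abs(np.mean(np.exp(1j * phases)))

print(f"before: synchrony = {synchrony(theta):.2f}")
for _ in range(20000):
    # The shared medium carries only the pooled influence of all oscillators.
    pooled = np.mean(np.exp(1j * theta))
    r, psi = abs(pooled), np.angle(pooled)
    # Each oscillator is nudged toward the collective phase.
    theta += dt * (omega + coupling * r * np.sin(psi - theta))
print(f"after:  synchrony = {synchrony(theta):.2f}")
```

The only point of the sketch is that a resource shared by all the parts can impose a pattern on the parts that none of them contains on its own.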
This is a mere ‘how is it possible’ account of the substantive self, and over time it might be shown to be inconsistent with either reason or empirical evidence. Yet, at this time, no reason or evidence exists that I know of that would counsel against this account. Further, a substantive self, as described here, would help to make sense of certain features of attention, discussed above. Thus, I see no reason to reject the existence of a substantive self.
But note that, in my picture, the self is only as strong as its powers of attention. While this might be an uncomfortable idea for some, I take it to be preferable to losing the self altogether. And now, in the words of Galen Strawson, it is your move…





Automation and robotics | Editors’ pick
The golden age
The 15-hour working week predicted by Keynes may soon be within our grasp – but are we ready for freedom from toil?
by John Quiggin






Leisure society: tourists at the Tahiti Motel swimming pool in Wildwood, New Jersey, 1960s. Photo by Aladdin Color, Inc/Corbis
John Quiggin
is professor of economics at the University of Queensland in Brisbane. He is the author of Zombie Economics  (2010), and his latest book is Economics in Two Lessons: Why Markets Work So Well, and Why They Can Fail So Badly (forthcoming, 2019).
Edited by Ed Lake

I first became an economist in the early 1970s, at a time when revolutionary change still seemed like an imminent possibility and when utopian ideas were everywhere, exemplified by the Situationist slogan of 1968: ‘Be realistic. Demand the impossible.’ Preferring to think in terms of the possible, I was much influenced by an essay called ‘Economic Possibilities for our Grandchildren’, written in 1930 by John Maynard Keynes, the great economist whose ideas still dominated economic policymaking at the time.
Like the rest of Keynes’s work, the essay ceased to be discussed very much during the decades of free-market liberalism that led up to the global financial crisis of 2007 and the ensuing depression, through which most of the developed world is still struggling. And, also like the rest of Keynes’s work, this essay has enjoyed a revival of interest in recent years, promoted most notably by the Keynes biographer Robert Skidelsky and his son Edward.
The Skidelskys have revived Keynes’s case for leisure, in the sense of time free to use as we please, as opposed to idleness. As they point out, their argument draws on a tradition that goes back to the ancients. But Keynes offered something quite new: the idea that leisure could be an option for all, not merely for an aristocratic minority.
Writing at a time of deep economic depression, Keynes argued that technological progress offered the path to a bright future. In the long run, he said, humanity could solve the economic problem of scarcity and do away with the need to work in order to live. That in turn implied that we would be free to discard ‘all kinds of social customs and economic practices, affecting the distribution of wealth and of economic rewards and penalties, which we now maintain at all costs, however distasteful and unjust they may be in themselves, because they are tremendously useful in promoting the accumulation of capital’.
Keynes was drawing on a long tradition but offering a new twist. The idea of a utopian golden age in which abundance replaces scarcity and the world is no longer ruled by money has always been with us. What was new in Keynes was the idea that technological progress might make utopia a reality rather than merely a vision.
Traditionally, the golden age was located in the past. In the Christian world, it was the Garden of Eden before the Fall, when Adam was cursed to earn his bread with the sweat of his brow, and Eve to bring forth her children in sorrow. The absence of any discussion of the feasibility of an actual golden age was unsurprising. As Keynes observed in his essay, ‘From the earliest times of which we have record — back, say, to 2,000 years before Christ — down to the beginning of the 18th century, there was no very great change in the standard of life of the average man living in the civilised centres of the earth’. The vast majority of people lived lives of hard labour on the edge of subsistence, and had always done so. No feasible political change seemed likely to alter this reality.
It was only with the Industrial Revolution, and the Enlightenment that preceded it, that the idea of a future golden age, realised as a result of human action, began to seem possible. By the end of the 18th century incomes had risen to the point where radical thinkers such as William Godwin could propose that, with a just distribution of wealth, everyone could live well.
Such dangerous speculation led to the first and still the most notable defence of the inevitability of scarcity, Malthus’s ‘Essay on the Principle of Population’, written specifically to refute Godwin. Malthus argued that, even if a technological innovation or redistribution of wealth could improve the living standards of the masses, the result would simply be to allow more children to survive. Inevitably, the exponential growth of population would outstrip linear growth in the means of subsistence. In a short time, the poor would be poor once again.
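Malthus’s contrast can be put schematically; the symbols below are placeholders of mine, not Malthus’s own figures. Population multiplying geometrically while subsistence grows only arithmetically means that subsistence per head eventually falls, whatever the starting point.

```latex
% Schematic only; P_0, F_0, r > 1 and c > 0 are placeholder constants.
P(t) = P_0\, r^{t} \quad \text{(population: geometric growth)}
F(t) = F_0 + c\, t \quad \text{(subsistence: arithmetic growth)}
\frac{F(t)}{P(t)} = \frac{F_0 + c\, t}{P_0\, r^{t}} \;\longrightarrow\; 0 \quad \text{as } t \to \infty
```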
In the initial presentation of his argument, Malthus admitted only two checks on population — misery and vice. Misery meant poverty and hunger. Vice meant contraception, to which Malthus, unlike his neo-Malthusian successors, was resolutely opposed. Although he later admitted the third option of ‘moral restraint’ (that is, sexual abstinence), he was comfortably assured that this would never be sufficient to undermine his argument. Thus he concluded that the maintenance of a small upper class (clergymen, for example), with leisure to preserve, extend and transmit culture, was the best that humanity could hope for.

Linear growth? Fruit processing in Hawaii, 1960s. Factories drove up both working hours and living standards. Photo by Bates Littlehales/National Geographic/Getty
The conditions of the early 19th century seemed to support Malthus’s case. The Industrial Revolution had produced an intensification of work that was almost unparalleled in human history. Driven off the land by enclosure acts and population growth, former peasants and agricultural labourers became the first industrial proletariat. The factories in which they worked rapidly drove old trades and cottage industries, like that of the handloom weavers, into destitution and then into oblivion.
Unconstrained by seasons or by the length of the day, working hours reached an all-time peak, with the number of hours worked estimated at over 3,200 per year — a working week of more than 60 hours, with no holidays or time off. There were small increases in material consumption, but not nearly enough to offset the growth in the duration and intensity of work.
Most economists of Malthus’s time agreed with him. All the standard models ended in a steady state, with the majority of the population at subsistence. The only important exception was Karl Marx, for whom the process of immiseration ended, not with a subsistence-level steady state, but with crisis and revolution.
By the late 19th century, things had changed. On the one hand, Malthus’s predictions were being falsified in practice. A growing middle class was enjoying improved living standards as a result of technological progress. And, whether through moral restraint or contraception, they were having smaller families. The relatively novel idea of progress — that the natural tendency of human affairs was to get better rather than worse — rapidly became part of ‘common sense’.
The working class had more compelling reasons to hope for better things. Over decades of struggle, workers clawed back the ground they had lost and then some. The Factory Acts outlawed child labour in Britain, and by 1870 all children in England and Wales were entitled to at least an elementary education. The hours of work were limited by legislation and union action. The eight-hour day, a norm that is still under challenge 150 years later, was first achieved by Melbourne stonemasons in 1856, though it was not established more generally, even in Australia, until the early 20th century. The weekend, making Saturday as well as Sunday a day of leisure, came even later, around the middle of the 20th century in most developed countries.
The idea that a combination of technological progress and political reform could produce a genuine utopia became an appealing alternative to the ‘pie in the sky’ of an afterlife. Edward Bellamy’s Looking Backward (1888), a critique of 19th century capitalism written from the imagined perspective of the year 2000, was the archetypal example of this literature. Oscar Wilde’s ‘The Soul of Man under Socialism’ (1891) was perhaps the most appealing. Even Marx, sternest critic of the old utopians, had his moments, most notably in The German Ideology (1846). There, he and Engels looked forward to a society in which labour did not depend on the lash of monetary incentives:
For as soon as the distribution of labour comes into being, each man has a particular, exclusive sphere of activity, which is forced upon him and from which he cannot escape. He is a hunter, a fisherman, a herdsman, or a critical critic, and must remain so if he does not want to lose his means of livelihood; while in communist society, where nobody has one exclusive sphere of activity but each can become accomplished in any branch he wishes, society regulates the general production and thus makes it possible for me to do one thing today and another tomorrow, to hunt in the morning, fish in the afternoon, rear cattle in the evening, criticise after dinner, just as I have a mind, without ever becoming hunter, fisherman, herdsman or critic.
None of these writers, however, had a theory of economic growth. Neither was one to be found in the literature of classical economics. Keynes’s discussion of economic possibilities was one of the first to spell out the argument that improvements in living standards, based on a combination of technological progress and capital accumulation, might be expected to continue indefinitely.
He argued that technological progress at a rate of two per cent per year would be sufficient to multiply our productive capacity nearly eightfold in the space of a century. Allowing for a doubling of output per person, that would be consistent with a reduction of working hours to 15 hours a week or even less. This, Keynes thought, would be sufficient to satisfy the ‘old Adam’ in us who needs work in order to be contented.
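The arithmetic behind that claim is simple compound growth. The reconstruction below is mine, and the 55-hour baseline week is an illustrative assumption rather than Keynes’s own figure.

```latex
% Two per cent a year, compounded over a century:
(1.02)^{100} \approx 7.2 \quad \text{(``nearly eightfold'')}
% If output per person doubles, the remaining gain can go to shorter hours:
7.2 / 2 \approx 3.6, \qquad \frac{55 \text{ hours}}{3.6} \approx 15 \text{ hours per week}
```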
Keynes himself had no grandchildren, but he was a contemporary of my own grandparents. It seemed to me when I first read his essay that there was a good chance that his vision might be realised in my lifetime. The social democratic welfare state, supported by Keynesian macroeconomic management, had already smoothed many of the sharp edges of economic life. The ever-present threat that we might be reduced to poverty by unemployment, illness or old age had disappeared from the lives of most people in developed countries. It wasn’t even a memory for the young.
There was, it seemed, every reason to expect further progress towards Keynes’s vision. Working hours were decreasing. A comfortable retirement at or before 65 had become a normal expectation. The idea of a lengthy and fairly leisurely university education was increasingly accepted, even if access to higher education was far from universal. More generally, in a labour market where the number of vacancies routinely exceeded the number of jobseekers, responding to economic ‘rewards and penalties’ seemed much less urgent. If one job was unsatisfying or boring, it was a simple matter to quit, take some time off and then find another.
In these favourable conditions, anti-materialist attitudes that had been confined to a Bloomsbury elite in Keynes’s day became widespread, particularly among the young. The enthusiastic consumerism of the 1950s was repudiated in varying degrees by nearly everyone, a trend exemplified by the adoption of blue jeans, previously the cheap and durable everyday wear of unskilled workers. The idea of ‘the environment’ as a problem of more general concern than specific local issues such as air pollution and the preservation of national parks was also a product of the ’60s, book-ended by Rachel Carson’s Silent Spring (1962) and the first Earth Day in 1970. The idea that we could continue on a path of ever-growing material consumption appeared to be not merely unsatisfying but a recipe for ultimate catastrophe.
So on a first reading, ‘Economic Possibilities for our Grandchildren’ seemed prophetic. Yet, 40 or so years later, I am a grandparent myself, the year 2030 is rapidly approaching, and Keynes’s vision seems further from reality than ever. At least in the English-speaking world, the seemingly inevitable progress towards shorter working hours has halted. For many workers it has gone into reverse.
The situation in Europe was, until recently, very different. Germany’s work hours declined from 2,387 hours annually in 1950 to 1,408 in 2010. France’s declined from 2,241 hours annually in 1950 to 1,552 in 2010. Yet even here, and even before the advent of austerity, there were signs of a turnaround. The loi Aubry, the law which reduced the normal French working week to 35 hours, has been repeatedly weakened. Work-sharing in Germany was highly successful in reducing the impact of the global financial crisis, but that does not seem to have had much effect on German judgments about the desirability of more and harder work for other countries.

Have allowances of free time peaked? A worker at the IRS center in Ogden, USA, 1980s. Photo by Roger Ressmeyer/Corbis
Moreover, far from fading into irrelevance, the struggle to accumulate capital and maintain or increase consumption is more intense than ever. Instead of contracting, the values of the market have penetrated ever further into every aspect of our lives. During the decades leading up to the global financial crisis, the scope and scale of speculative markets grew beyond any conceivable bound. Avarice and usury, as Keynes called them, are worshipped on an unimaginable scale. Financial instruments with notional values in the trillions were routinely traded, creating immense wealth for some (mostly participants in the trade) while bringing ruin and destitution to others (mostly far removed from the scene of the action).
Particularly during the ’90s, it seemed that this wealth was there to be taken by anyone willing to focus their thoughts on financial enrichment at the expense of any broader goals in life. Now that the bubble has burst, the burden of unsustainable debt left behind for both households and governments has ensured that the gods of the marketplace maintain their pre-eminence, even if their worship is much less enthusiastic than before.
How did this reversal come about, and is there any possibility that Keynes’s vision will be realised?
The first of these questions is easily answered. The economic turmoil of the ’70s put an end to the utopianism of the ’60s, and resulted in the resurgence of a hard-edged version of capitalism, variously referred to as neoliberalism, Thatcherism and the Washington Consensus. I have used the more neutral term ‘market liberalism’ to describe this set of ideas.
The central theoretical tenet of market liberalism is the efficient (financial) markets hypothesis. In the strong form that is most relevant to policy decisions, the hypothesis states that the prices determined in markets for financial assets such as shares, bonds and their various derivatives are the best possible estimates of the value of those assets.
In the core ideology of market liberalism, the efficient markets hypothesis is combined with the claim that the best way to achieve prosperity for all is to let the rich get richer. This claim is rarely spelt out explicitly by its advocates, so it is best known by its derisive label, the ‘trickle down’ hypothesis.
Taken together, the efficient markets hypothesis and the trickle down hypothesis lead us in the opposite direction to the one envisaged by Keynes. If these hypotheses are true, the mega-fortunes piled up in speculative financial markets are not merely justified: they are essential to achieve and maintain decent living standards for the rest of us. The investments that generate technological progress will, on this view, only be made if they are guided by financial markets driven by the desire to make unimaginable fortunes.
As long as market liberalism rules, there is no reason to expect progress towards a less money-driven society. The global financial crisis and the subsequent long recession have fatally discredited its ideas. Nevertheless, the reflexes and assumptions developed under market liberalism continue to dominate the thinking of politicians and opinion leaders. In my book, Zombie Economics (2010), I describe how these dead, or rather undead, ideas have risen from their graves to do yet more damage. In particular, after a resurgence of interest in Keynes’s macroeconomic theory, the entrenched interests and ideas of the era of market liberalism have regained control, pushing disastrous policies of ‘austerity’ and yet more structural ‘reform’ on free-market lines. Social democratic parties have failed to put up any serious resistance so far. Popular anger at the crisis has been channelled into right-wing tribalist movements such as the Tea Party in the US and Golden Dawn in Greece.
This experience makes it clear that, if Keynesian social democracy is to regain the dominant position it held from the end of Keynes’s own lifetime until the ’70s, it must offer more than a technocratic lever to stabilise the economy. We need a vision of a genuinely better society. For this reason, the time is right to re-examine Keynes’s vision of a future where economic scarcity, real or perceived, no longer dominates life as it does today.
To begin with, it is important to consider the limitations of Keynes’s thinking. First, Keynes considered only the developed world, implicitly assuming that the colonialist world order could be sustained indefinitely. Judging from his other writing, including his early work on the Indian economy, Keynes envisaged a gradual increase in living standards, under colonial tutelage, for the poor countries. The idea that a post-scarcity society in Europe and its settler offshoots could coexist with mass poverty elsewhere seems incongruous now, but in 1930, the European empires seemed destined to endure for a long time. The Indian National Congress had declared its goal of independence only the previous year, and the Statute of Westminster, establishing the legislative independence of the settler dominions, was a year in the future.
Once we try to apply Keynes’s reasoning to the world as a whole, it’s clear that the end of scarcity is further away than he supposed. How much further? To be more precise, how much technological progress would be needed for everyone to enjoy the average standard of living of Britain in 1930 (when Keynes was writing) by working only 15 hours a week?
By 1990, 60 years after Keynes’s essay, average income for the world as a whole had just reached Britain’s level in 1930. So, it seems we need to add another 60 years, or two generations, to his timescale. On the other hand, because developing countries are mostly adopting existing technology, the average world growth rate of income per person is around three per cent, not the two per cent proposed by Keynes. In that case, an eightfold increase would take only 70 years. So, taking the entire world into account only defers the estimated end of scarcity by 30 years, to 2060 — within the expected lifetime of my children.
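Restated as arithmetic, the revised timetable is just the same compound-growth calculation applied to the figures given above.

```latex
(1.03)^{70} \approx 7.9 \quad \text{(an eightfold rise at three per cent takes roughly 70 years)}
1990 + 70 = 2060, \qquad 2060 - 2030 = 30 \text{ years beyond Keynes's original horizon}
```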
The problem of distribution, sharp enough in the Britain of the ’30s, is far worse for the world as a whole. A billion or so people live in destitution, and billions more are poor by any reasonable standard. Nevertheless, for the first time in history, our productive capacity is such that no one need be poor. In fact, more people are rich, by any reasonable historical standard, than are poor.
Even more strikingly, perhaps, more people are obese than are undernourished. And this is not true merely in terms of basic nutrition. Right now, the world produces enough meat to give everyone a diet comparable to the average Japanese person’s. This amount could be increased by replacing grain-fed beef with chicken and pork, a step that would also reduce carbon emissions. With another 50 years of technological progress and even a modest effort to aid the poorest onto the path of rapid growth already being followed by most of Asia, poverty could be eliminated. The vast majority of the world’s population could enjoy a living standard comparable, in material terms, to that of the global middle class of today.
A second problem to which Keynes pays only passing attention is that of housework. As a male academic born into a household staffed with domestic servants, he almost certainly did none himself. His discussion reflects this. Looking forward to the problems that might arise in a society with unaccustomed leisure, Keynes mentions ‘the wives of the well-to-do classes’ who ‘cannot find it sufficiently amusing, when deprived of the spur of economic necessity, to cook and clean and mend, yet are quite unable to find anything more amusing’. These traditional tasks had not, of course, been eliminated by technological progress. Rather, they had been contracted out to others, typified by the charwoman in a song quoted by Keynes, whose hope for paradise was to do nothing for all eternity.
Some housework is enjoyable and fulfilling but much of it is drudgery. A central requirement for a post-scarcity society is that no one should have to spend a lot of time on the latter.
The household appliances that first came into widespread use in the ’50s (washing machines, vacuum cleaners, dishwashers and so on) eliminated a huge amount of housework, much of it pure drudgery. By contrast, technological progress for the next 40 years or so was limited. Arguably, the only significant innovation in this period was the microwave oven. As a result, housework alone takes up all of Keynes’ proposed 15 hours a week. Time-use surveys suggest that the average woman in the UK spends around three hours a day on household work (excluding childcare, of which more later) and the average man spends about two hours. Both of these numbers have declined over time, but only slowly.
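Converted to weekly totals (a simple restatement of the survey figures just cited, not new data), those daily averages bracket Keynes’s proposed week.

```latex
3 \text{ hours/day} \times 7 \approx 21 \text{ hours/week}, \qquad 2 \text{ hours/day} \times 7 = 14 \text{ hours/week}
```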
Market alternatives to most kinds of housework are available. Cooking can be replaced by eating out, washing and ironing can be sent out to a laundry, and (low-paid) workers can be hired to clean houses. Obviously, while people are being paid to do the housework of others, we are a long distance from Keynes’s post-scarcity world. A little less obviously, such a situation demands more time spent in paid work from those who want the money to buy market alternatives.
Still, the time spent on housework has been falling, and there are good reasons to think that it can fall further, to the point where most housework is done by choice rather than necessity. The rise of the internet and the advent of mobile telephony have drastically simplified a wide range of household chores, from banking and bill-paying to dealing with tradespeople. At the same time, the online world is changing shopping from a necessity to an optional extra, pursued only by those who enjoy it. It allows the requirements for a decent life to be met without any significant interaction with the culture of consumption, exemplified by the shopping mall.
An even more important omission in Keynes’s essay is the effort involved in raising children. Childless himself, Keynes came from a social class in which child rearing was contracted out, to an extent unparalleled before or since. Babies were handed to wet-nurses, cared for by nannies and governesses and then, from the age of eight or even younger, packed off to boarding schools. From the perspective of today’s parents, such a world is hard to imagine. Even if the need for market work were to disappear altogether, parents of young children would not have much time to worry about the need to fill their leisure hours.
But far from weakening Keynes’s case against a money-driven society, the problems of caring for children illustrate the way in which our current economic order fails to deliver a good life, even for the groups who are doing relatively well in economic terms. The workplace structures that define a successful career today require the most labour from ‘prime-age’ workers aged between 25 and 50, the stage when the demands of caring for children are greatest.

For the first time in history the world produces enough food so that none need go hungry: yet we are far from solving the problem of fair distribution. Hot dogs on Puget Sound, 1960s. Photo by Merle Severy/National Geographic/Getty
Work is distributed unequally, and perversely, in other dimensions as well. And yet, in the English-speaking countries at least, this has not meant more leisure so much as more time spent in retirement, in unemployment, or otherwise involuntarily excluded from the labour force. The result has been an inequality of leisure, the counterpart to the growing inequality of income. Particularly in the US, families are becoming polarised. On the one hand there is the two-income class of economically successful couple households in which both partners work full-time or more. On the other is the zero-income class, with one or two adults dependent either on welfare benefits or else on intermittent and insecure low-wage employment.
If work were distributed more equally, both between households and over time, we could all be better off. But it seems impossible to achieve this without a substantial reduction in the centrality of market work to the achievement of a good life, and without a substantial reduction in the total hours of work.
The first step would be to go back to the social democratic agenda associated with postwar Keynesianism. Although that agenda has largely been on hold during the decades of market-liberal dominance, the key institutions of the welfare state have remained both popular and resilient, as shown by the wave of popular resistance to cuts imposed in the name of austerity.
Key elements of the social democratic agenda include a guaranteed minimum income, more generous parental leave, and expanded provision of health, education and other social services. The gradual implementation of this agenda would not bring us to the utopia envisaged by Keynes — among other things, those services would require the labour of teachers, doctors, nurses, and other workers. But it would produce a society in which even those who did not work, whether by choice or incapacity, could enjoy a decent, if modest, lifestyle, and where the benefits of technological progress were devoted to improving the quality of life rather than providing more material goods and services. A society with these priorities would allocate most investment according to judgments of social need rather than market signals of price and profit. That in turn would reduce the need for a large and highly rewarded financial sector, even in relation to private investment.
There remains the question of how to move from a revitalised social democracy to the kind of utopia envisaged by Keynes. It would be absurd to spell out a detailed transitional program, but it’s useful to think about one of the central elements of such a society — a guaranteed minimum income.
In one sense, a guaranteed minimum income involves little more than a re-labelling of the existing benefits provided by all modern welfare states (with the US, as always, a notable exception). In most modern welfare states, everyone is eligible for income support which should be sufficient to prevent them from falling into poverty. Those who cannot work because of age or disability are automatically entitled to such support, while unemployed workers receive either insurance benefits related to their previous wages or some basic allowance conditional on job search.
In a post-scarcity society, everyone would be guaranteed an income that yielded a standard of living significantly better than poverty, and this guarantee would be unconditional. The move from a near-poverty benefit subject to eligibility conditions to a liveable, guaranteed minimum income would require both an increase in productivity, such that a smaller number of workers could produce an adequate income for all, and some fairly radical changes in social attitudes.
It seems clear enough that technological progress can generate the necessary productivity gains, so what is needed most are two changes in attitudes to work that would make a guaranteed minimum income socially sustainable. The first is that the production of market goods and services needs to become pleasant enough that those doing it don’t mind supporting others who choose not to work. The second is that the option of receiving a guaranteed minimum income does not become a trap, leading into the kind of idleness that produces despair.
We can imagine a few steps towards this goal. One would be to allow recipients of the minimum income to choose voluntary work as an alternative to job search. In many countries, a lot of the required structures are in place under ‘workfare’ or ‘work for the dole’ schemes. All that would be needed is to replace the punitive and coercive aspects of these schemes with positive inducements. A further step would be to allow a focus on cultural or sporting endeavours, whether or not those endeavours involve achieving the levels of performance that currently attract (sometimes lavish) public and market support.
An Australian example might help to illustrate the point. Under our current economic structures, someone who makes and sells surfboards can earn a good income, as can someone good enough to join the professional surfing circuit. But a person who just wants to surf is condemned, rightly enough under our current social relations, as a parasitic drain on society. With less need for anyone to work long hours at unpleasant jobs, we might be more willing to support surfers in return for non-market contributions to society such as membership of a surf life-saving club. Ultimately, people would be free to choose how best to contribute ‘according to their abilities’ and receive from society enough to meet at least their basic needs.
We do have the technological capacity to start down that path and to approach the goal within the lives of our grandchildren. That’s a couple of generations behind Keynes’s optimistic projection, but still a hope that could counter the current tides of cynicism and despair.
This brings us to the final, really big question. Supposing a Keynesian utopia is feasible, will we want it? Or will we prefer to keep chasing after money to buy more and better things?
In 2008, 16 economists contributed to an interesting volume called Revisiting Keynes, edited by Lorenzo Pecchi and Gustavo Piga. Many of those economists argued that Keynes had been proved wrong. Experience, they said, had shown that people will always want to consume more and will be willing to work harder to do it. Implicit in much of their discussion was the idea that the US economy, as of 2008, represented the way of the future. With the advantage of a few years’ hindsight, this assumption seems every bit as dubious as the view against which Keynes argued in 1930, that the Depression would continue indefinitely.
The steady growth in consumption expenditure in the US in the decades leading up to the financial crisis depended on debt. And of course, the need to service debt necessitated a willingness to work long hours. Now, after millions of foreclosures and bankruptcies, a large proportion of the population has been excluded from credit markets. Households in general have seen the need to build up their savings.
More importantly, the culture of conspicuous consumption, which reached unparalleled heights of excess in the 1990s and early 2000s, is on the wane. The most striking emblem of this change is the end of the American love affair with the motor car. Throughout the 20th century the car stood in American culture as a symbol of personal freedom attainable through consumption expenditure. Year after year, pausing only briefly for recessions and slowdowns, more and more cars were driven further and further, burning more and more petrol. But this endless growth has now, apparently, come to an end. The use of petrol in the US peaked in 2005, before the advent of the economic crisis. The distance driven has also peaked and Americans are buying fewer and smaller cars. Economic factors, including higher fuel prices, have a role to play. But anecdotal evidence suggests that there is more to it than this. Increasingly, driving is seen as an unpleasant chore rather than an exercise of freedom. Young people in particular have been less eager than their parents to start driving and acquire cars.
Such shifts bring bigger changes in their wake. Without cars and commuting, large houses in the suburbs are much less attractive. After decades of steady growth, the size of new houses seems to be declining. Smaller houses mean fewer possessions to fill them, and less appeal for a privatised life based on private consumption.
An escape from what Keynes called ‘the tunnel of economic necessity’ is still open to us. Yet it will require radical changes in the economic structures that drive the chase for money and in the attitudes shaped by a culture of consumption. After decades of finance-driven capitalism, it takes an effort to recall that such changes ever seemed possible.
Yet it is now clear that market liberalism has failed in its own terms. It promised that if markets were set free, everyone would benefit in the long run. In reality, most households in developed countries experienced less income growth under market liberalism than in the decades of Keynesian social democracy after 1945. Of more immediate importance, except for the top one per cent there has been no recovery from the crisis of 2008, and even worse looms ahead. And despite the initial success of the backlash against Keynesian macroeconomic policies, austerity is now failing in political as well as economic terms.
Popular anger has boiled over in a string of electoral defeats for the advocates of austerity. But, unlike the right-wing tribalism that has formed part of that backlash, progressive politics cannot, in the end, rely on anger. It must offer the hope of a better life. That means reclaiming utopian visions such as that of Keynes.