As a species, we humans are awfully obsessed with the future. We love to speculate about where our evolution is taking us. We try to imagine what our technology will be like decades or centuries from now. And we fantasise about encountering intelligent aliens – generally, ones who are far more advanced than we are. Lately those strands have begun to merge. From the evolution side, a number of futurists are predicting the singularity: a time when computers will become powerful enough to simulate human consciousness, or absorb it entirely. In parallel, some visionaries propose that any intelligent life we encounter in the rest of the Universe is more likely to be machine-based than humanoid meat-bags such as ourselves.
These ruminations offer a potential solution to the long-debated Fermi Paradox: the seeming absence of intelligent alien life swarming around us, despite the fact that such life seems possible. If machine intelligence is the inevitable end-point of both technology and biology, then perhaps the aliens are hyper-evolved machines so off-the-charts advanced, so far removed from familiar biological forms, that we wouldn’t recognise them if we saw them. Similarly, we can imagine that interstellar machine communication would be so optimised and well-encrypted as to be indistinguishable from noise. In this view, the seeming absence of intelligent life in the cosmos might be an illusion brought about by our own inadequacies.
There is also a deeper message laid bare within our futurist projections. Our notions about the emergence of intelligent machines expose our fantasies (often unspoken) about what perfection is: not soft and biological, like our current selves, but hard, digital and almost inconceivably powerful. To some people, such a future is one of hope and elevation. To others, it is one of fear and subjugation. Either way, it assumes that machines sit at the pinnacle of the evolution of consciousness.
Superficially, the logic behind the conjectures about cosmic machine intelligence appears pretty solid. Extrapolating the trajectory of our current technological evolution suggests that, with enough computational sophistication on hand, the capacity and capability of our biological minds and bodies could become less and less attractive. At a certain point we'd want to hop into new receptacles, custom-built to suit whatever takes our fancy. Similarly, that technological arc could take us to a place where we'll create artificial intelligences that are indifferent to us, or that will overtake and subsume (or simply squish) us.
Biology, the argument goes, is not up to the task of sustaining a pan-stellar civilisation, or any far-future human civilisation. The environmental and temporal challenges of space exploration are huge. Any realistic effort to become an interstellar species might demand robust machines, not delicate protein complexes with fairly pathetic use-by dates. A machine might live forever and copy itself perfectly, unencumbered by the error-prone flexibility of natural evolution. Self-designing life forms could also tailor themselves to very specific environments. In a single generation they could adapt to the great gulfs of time and space between the stars, or to the environments of alien worlds.
Pull all of these pieces together and it can certainly seem that the human blueprint is a blip, a quickly passing phase. People take this analysis seriously enough that influential figures such as Elon Musk and Stephen Hawking have publicly warned about the dangers of all-consuming artificial intelligence. At the same time, the computer scientist Ray Kurzweil has made a big splash with books and conferences that preview an impending singularity. But are living things really compelled to become ever-smarter and more robust? And is biological intelligence really a universal dead-end, destined to give way to machine supremacy?
Perhaps not. There is quite a bit more to the story.
Fashionable descriptions of the inevitable triumph of machine intelligence contain many critical biases and assumptions that could prevent them from ever turning into reality. It is far from clear that current computational technology is leading us to the singularity, or any grandiose moment of exponential transcendence as a species. (There go my lucrative speaking deals at tech conferences.) All the same, the future may still be fascinating.
Some of these extravagant ideas can be traced back to John von Neumann's astonishing conjectures on self-replicating automata, which were compiled in his posthumous book Theory of Self-Reproducing Automata (1966). That work helped cement the concept of machines building more machines, in an exponential and perhaps uncontained explosion that could simply swamp other life forms that get in the way. Von Neumann also considered how such machines could simulate some of the functions and actions of human neurons.
In the years since then, electronic connectivity certainly has had a huge impact on the way that many humans go about their daily lives, and even on the way in which we problem-solve and think about any new question or challenge. Who among us in the connected modern world hasn't Googled a question before even trying to work through the answer, or before asking another human being? Part of our collective wisdom is now uploaded, placed in an omnipresent cloud of data. The relative importance of our individual breadth of knowledge might be declining. It might even be the case that the importance of individual expertise – specialisation – is lessening in the process.
Where that's taking us isn't obvious, however. If anything, we could be heading for a hive-mind state, a collective organism more akin to a termite colony or a set of squirmy naked mole-rats. Rather than increasing our intelligence, we might actually be throttling the raw inputs, training ourselves to be increasingly passive. A pessimist might see our minds stalling out, becoming part of a self-referencing swarm rather than a set of exponentially improving geniuses.
History also teaches us that it is nigh-impossible to foresee the long-term impacts of disruptive technologies. As a crude example, the invention of the continuous rotary steam engine in the late 1700s upturned the human landscape. It had not been predicted. Nor did anyone soon predict that internal combustion and electricity would make those same steam engines effectively obsolete barely 150 years later. Nor was anyone quick to recognise that all of this hydrocarbon combustion might seriously harm our species by altering the composition of the Earth’s atmosphere.
There is also no citable evidence to suggest that our particular brand of intelligence is more than a quirky outcome of billions of years of evolution, much less that it’s somehow optimal on a cosmic landscape. (To be fair, there’s no good evidence to the contrary either – no data indicating that we really are a quirk.) The upshot is that extrapolating our experience of consciousness and intelligence in order to propose any specifics about the state of alien intelligence and its motivations – its agency – is an awfully hard thing to do.
This line of argument sounds like the ultimate downer: we might be getting stupider, we cannot predict our future path, and we have no idea what kinds of intelligent beings (if any) exist in the cosmos beyond. But I maintain that there is a silver lining, because this very act of self-examination forces us to confront some harsh, but fascinating, realities about our culture and our technology.
One such reality is the issue of energetics – a topic discussed by von Neumann, but often ignored in futurist conversations. In computer design, a key factor is computational capacity versus energy use, sometimes quoted as computations-per-joule. As microprocessors get more complex, and silicon-based architectures get smaller and smaller (these days, down to tens-of-nanometre scales), efficiency is still improving. As a result, the computations-per-joule ratio has been getting better and better with each passing year.
Except, that ratio has been getting better by less and less with each passing year. In fact, some researchers have stated that there might be an upcoming ‘wall’ of energy efficiency for conventional processing architectures, somewhere around 10 giga-computations-per-joule for operations such as basic multiplication.
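To see what that kind of flattening looks like, here is a toy numerical sketch in Python. Only the ceiling comes from the figure quoted above; the starting efficiency and the fraction of the remaining gap closed each year are invented purely for illustration:

```python
# Toy model of efficiency gains that shrink as they approach a ceiling.
# Only the ceiling (~1e10 operations per joule) comes from the essay;
# the starting value and yearly gain fraction are illustrative guesses.

WALL = 1e10    # hypothesised efficiency wall, operations per joule
eff = 1e8      # assumed starting efficiency (illustrative)
GAIN = 0.5     # assumed fraction of the remaining gap closed each year

for year in range(1, 9):
    step = GAIN * (WALL - eff)   # the closer to the wall, the smaller the step
    eff += step
    print(f"year {year}: {eff:.2e} ops/J (gain {step:.2e})")
```

Each year still brings an improvement, but every improvement is smaller than the last: exactly the "better by less and less" pattern described above.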
That's a big potential roadblock for any quest for true artificial intelligence or brain-uploading machinery. Estimates of the computing power needed to match the oomph of a human brain (measured by the speed and complexity of its operations) imply an energy efficiency about a billion times better than that wall.
To put that in a different context, our brains use energy at a rate of about 20 watts. If you wanted to upload yourself intact into a machine using current computing technology, you’d need a power supply roughly the same as that generated by the Three Gorges Dam hydroelectric plant in China, the biggest in the world. To take our species, all 7.3 billion living minds, to machine form would require an energy flow of at least 140,000 petawatts. That’s about 800 times the total solar power hitting the top of Earth’s atmosphere. Clearly human transcendence might be a way off.
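Since the numbers carry the argument here, it is worth making the arithmetic explicit. A minimal sketch, using only the round figures quoted above (all of them the essay's estimates, not measurements):

```python
# Back-of-envelope arithmetic behind the figures above. All inputs are
# the essay's round numbers, not measurements.

WALL = 1e10              # hypothesised efficiency wall, operations per joule
BRAIN_POWER = 20.0       # watts consumed by a human brain
EFFICIENCY_GAP = 1e9     # brain taken to be ~a billion times past the wall

# Implied processing rate of one brain: 20 W at 1e19 ops/J ~ 2e20 ops/s
brain_ops_per_sec = BRAIN_POWER * (WALL * EFFICIENCY_GAP)

# Power to run one such mind on wall-limited conventional hardware
one_mind_watts = brain_ops_per_sec / WALL       # 2e10 W, ~ Three Gorges output

humanity_watts = one_mind_watts * 7.3e9         # ~1.5e20 W for every living mind
SUNLIGHT_WATTS = 1.74e17                        # ~174 PW atop Earth's atmosphere

print(f"one mind:  {one_mind_watts:.1e} W")
print(f"all minds: {humanity_watts / 1e15:,.0f} PW "
      f"(~{humanity_watts / SUNLIGHT_WATTS:.0f}x incident sunlight)")
```

Running this gives about 20 gigawatts per mind and roughly 146,000 petawatts for all of humanity, consistent with the "at least 140,000 petawatts" and "about 800 times" figures above.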
One possible solution is to turn to so-called neuromorphic architectures: silicon designs that mimic aspects of real biological neurons and their connectivity. Researchers such as Jennifer Hasler at the Georgia Institute of Technology have suggested that, if done right, a neuromorphic system could reduce the energy requirements of a brain-like artificial system by at least four orders of magnitude. Unfortunately, that big leap would still leave an efficiency gap of a factor of 100,000 before reaching the level of a human brain.
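That residual factor follows directly from the two numbers above; as a quick sanity check:

```python
# Order-of-magnitude bookkeeping for the neuromorphic shortfall,
# using only the factors quoted above.

total_gap = 1e9              # brain vs. conventional silicon at the wall
neuromorphic_gain = 1e4      # "at least four orders of magnitude"

remaining = total_gap / neuromorphic_gain
print(f"remaining efficiency shortfall: {remaining:.0e}x")   # 1e+05, i.e. 100,000
```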
Of course, the history of computer technology is replete with supposedly impenetrable barriers that collapse year by year, so optimism has not yet left the room. But the critical point is that none of this is a given. It might well be that, to capture the complexity, density and extraordinary efficiency of a modern human brain, silicon and its cousins are simply not the answer, no matter how they’re sculpted or stacked together.
A favourite alternative among techno-optimists is to invoke the possibility of quantum computation, which exploits the overlapping quantum states of atoms or systems in place of traditional computer transistors. Proponents suggest that the mind-bending computational capacity that state-superposition enables might solve the energy and speed problems, setting us on the path to building super-minds.
On paper, at least, a 'universal' or Turing quantum computer could exist with effectively boundless computational capacity. The British physicist David Deutsch articulated this idea brilliantly, and a little archly, in his paper 'Quantum Theory, the Church-Turing Principle and the Universal Quantum Computer' (1985). Notably, he left the details of how to accomplish such a feat as a problem for the reader to figure out.
A genuinely universal quantum computer could, in theory, simulate to any desired precision any finite physical system, including a mind, or other quantum computers for that matter. Going quantum could also allow simulations to be made massively parallel, and probabilistic tests to be completed incredibly quickly. Despite enormous laboratory and theoretical progress in recent years, however, the practical realisation of such concepts is a very complicated challenge. Although there are aspects of proposed quantum computing applications, such as contextualised search, that might fit perfectly with 'cognitive computing' (the epitome of many current efforts in artificial intelligence), these are a long way from a genuinely intelligent machine. And debate abounds on whether any kind of human-analogue AI could work at all.
The problem of energy efficiency rears its head here, too. Manipulating the central currency of computation, the qubit – be it a cold atom or some other quantum object – might require very little energy. But holding the components of a quantum computer in a state of coherence (with all those fragile quantum states preserved) is enormously taxing, and will always rely on a host of support systems and engineering that gobble up power. It's not clear that we know even roughly what the real-world computation-to-energy function for quantum computing will be.
Other factors are equally worrisome. A quantum computer of 'n' qubits can carry out 2^n computations in one cycle, but setting up those computations could also be a huge task of data flow. Simulating our entire Universe of about 10^89 particles and photons might take only 296 qubits, by some calculations, but how on earth do you enter all 10^89 initial conditions? Even more daunting, how do you pick the correct solutions from the quantum simulation? Simulating a human brain might be a bit easier, but you still have to quantify and initiate at least 10^14 neural connections (the approximate number found in your head) to set up the computation. Presumably we would also want that quantum brain to have a very high-throughput, high-fidelity sensory interface with the surrounding world. That's another unknown, possibly insurmountable challenge.
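The 296-qubit figure is easy to verify, since n qubits span 2^n basis states; the only input is the 10^89 particle count quoted above:

```python
import math

# Smallest register whose 2^n basis states can index ~10^89 particles
# and photons: solve 2^n >= 10^89.
n_qubits = math.ceil(89 * math.log2(10))   # log2(10^89) ~ 295.6
print(n_qubits)                            # 296, as quoted above

# The essay's catch: the register is tiny, but you would still need to
# load ~10^89 initial conditions into it, and a brain simulation would
# still need ~1e14 connection strengths specified before it could run.
```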
In fairness, I am oversimplifying the techniques and technological tricks that might be utilised. My own vision of the future might be far too limited. Nonetheless, I think that there is cause for a more measured response to the optimistic predictions of human-level AI. We need to admit that although the machinery to sustain intelligence comparable to, or exceeding, human intelligence might be possible to construct, it might not enable the kind of exponential computing growth that is often proposed.
In other words, the mathematics of exponentially improving machine intelligence could be sound, and yet the practical barriers could prove insurmountably steep.
To see where this might leave things, I'll (a bit hypocritically) take a leaf out of the futurists' book and do some wild extrapolation. I'd like to explore what happens if we meld the idea of slow growth for machine intelligence with the Fermi Paradox. Doing this is fun, but it's also informative.
Let’s suppose that an advanced cosmic intelligence succeeds at converting itself to a machine form, or has been overtaken by its super-smart, but not exponentially better, machine creations. What happens next?
Because these machines are hemmed in by efficiency limits, there is a possibility that they'd end up looking to their past for new tricks to move forwards. One thing that they would know (as we already do) is that biology works, and it works extremely well. Some researchers estimate that the modern human brain is at its computational limits, but it might require only a slightly cleverer machine to re-engineer such a complex organ. In other words, there could be a more optimal trajectory that leads away from machines and back to biology, with its remarkable energy efficiency.
There is also no guarantee that machine intelligences will be, or can be, perfectly rational. To engage with a complex universe, where mathematics itself contains true statements that cannot be proved, a touch of irrationality might be critical seasoning. Right now, we routinely speculate that the future of our intelligence lies in some other form, silicon or quantum perhaps, that we perceive to be superior to flesh. Perhaps the same theatre plays out for any intelligence. Machines might want to become biological again for practical reasons of energetics, or for other reasons that we cannot imagine or comprehend.
If life is common, and it regularly leads to intelligent forms, then we probably live in a universe of the future of past intelligences. The Universe is 13.8 billion years old and our galaxy is almost as ancient; stars and planets have been forming for most of the past 13 billion years. There is no compelling reason to think that the cosmos did nothing interesting in the 8 billion years or so before our solar system was born. Someday we might decide that the future of intelligence on Earth requires biology, not machine computation. Untold numbers of intelligences from billions of years ago might have already gone through that transition.
Those early intelligences could have long ago reached the point where they decided to transition back from machines to biology. If so, the Fermi Paradox returns: where are those aliens now? A simple answer is that they might be fenced in by the extreme difficulty of interstellar transit, especially for physical, biological beings. Perhaps the old minds are out there, but the cost of returning to biology was a return to isolation.
Those early minds might have once built mega-structures and deployed cosmic engineering across the stars. Maybe some of that stuff is still out there, and perhaps we’re on the cusp of detecting some of it with our ever-improving astronomical devices. The recent excitement over KIC 8462852, a star whose brightness varies in a way that cannot be readily explained by known natural mechanisms, is founded on the recognition that our instruments are now sensitive enough to investigate such possibilities. Perhaps alien civilisations have retreated to a cloistered biological existence, with relics of their mechanical-era constructions crumbling under the rigours of cosmic radiation, evaporation, and explosive stellar filth.
Our current existence could sit in a cosmically brief gap between that first generation of machine intelligence and the next one. Any machine intelligence or transcendence elsewhere in the galaxy might be short-lived as an interstellar force; the last one might already be spent, and the next one might not yet have surfaced. It might not have had time to come visiting while modern humans have been here. It might already be dreaming of becoming biological again, returning to an islanded state in the great wash of interstellar space. Our own technological future might look like this – turning away from machine fantasies, back to a quieter but more efficient, organic existence.
There is no shame in admitting the highly speculative nature of these ideas, and there is something special about the questions that prompt them. We’re examining possible futures for ourselves. It is conceivable that the Universe is already telling us what those options really are. Such acts of self-examination are unlike any other human endeavour, and that alone is worth paying attention to.