February 2019


Paul Davies

What is life?

DNA model at the American Museum of Natural History, New York, 1964. © Bettmann / Getty Images

In search of a unified theory of everything

Seventy-five years ago, the distinguished physicist Erwin Schrödinger published a celebrated book entitled What Is Life? Despite dazzling advances in biology since, scientists still can’t agree on what life is or how it began. There is no doubt that living organisms are in a class apart, almost magical in their amazing properties. Yet they are made of normal matter. In the past few years, however, the secret of life has finally begun to be revealed, and the missing link between matter and life comes from a totally unexpected direction. The discovery looks set to open up the next great frontier of science, with sweeping implications for technology and medicine. It also holds the tantalising promise of uncovering fundamentally new laws of nature.

Remarkably, What Is Life? appeared at the height of World War Two. Schrödinger had fled his native Austria to escape the Nazis and, after a brief sojourn in Oxford, settled in Dublin at the invitation of the prime minister, Éamon de Valera, accompanied by both his wife and mistress. Ireland was a neutral country, so Schrödinger felt free to pursue his academic work, unlike many of his scientific colleagues who assisted the Allies’ war effort. Schrödinger was best known as one of the founders of quantum mechanics, the most successful scientific theory ever. It explained at a stroke the properties of atoms, molecules, subatomic particles, nuclear reactions and the stability of stars. In practical terms, quantum mechanics has given us the laser, the transistor and the superconductor. For de Valera, Schrödinger was quite a catch.

Away from his normal environment, Schrödinger permitted himself to explore new interests, turning his attention to biology. Quantum mechanics is notorious for being hard to understand. Yet for all Schrödinger’s brilliance in crafting this esoteric branch of physics, the nature of life baffled him. Indeed, like many of his contemporaries, including Albert Einstein, he thought that understanding how life works was a much tougher proposition than understanding quantum mechanics.

The problem of what makes living organisms tick has puzzled some of the best minds in history. The philosopher Aristotle put his finger on a key property two and a half millennia ago. Living things seem to possess innate purposes or goals. It would make no sense to describe an atom or the moon as striving to achieve something, yet organisms behave this way all the time, battling for survival, seeking out mates, exploring new environments. This tendency of living things to be drawn towards a future goal, rooted in Aristotle’s notion of final causes, came to be described by the term “teleology”.

With the advent of the modern scientific era in the 17th century, physicists found no place for teleology. Isaac Newton envisaged a clockwork universe in which every particle of matter moved precisely in accordance with fixed universal laws, without regard to any destiny or purpose. Thus there opened up a vast chasm between physics and biology. So, given that both non-living and living things are made of the same sorts of atoms, whence comes the inner drive of organisms?

It took about two hundred years for biology to start catching up with physics, becoming a true science only with the publication of Charles Darwin’s On the Origin of Species in 1859. But whatever the greatness of the theory of evolution, it did nothing to close the gap with physics. Biologists concern themselves with what life does, not what it is. Although Darwin gave a convincing account of how life on earth has evolved over billions of years from simple microbes to the richness and diversity of the biosphere we see today, he refused to be drawn on how life got going in the first place. “One might as well think of origin of matter,” he quipped to a friend. The transformation of matter from the realm of physics and chemistry to the realm of biology remained impenetrable.

This state of affairs persisted until the middle of the 20th century. Although historians disagree about the originality of Schrödinger’s ideas, there is no doubt his book proved extremely influential and ushered in the era of molecular biology in the early 1950s. Schrödinger surmised that genetic information must be stored in some sort of giant molecule. Following this pointer, James Watson and Francis Crick zeroed in on DNA and discovered its famous double-helix structure. They concluded that genes are segments of DNA in which information is encoded in the specific arrangement of atoms.

The informational part of DNA consists of four molecular building blocks, often referred to simply by the letters A, C, T and G. Sequences of these letters spell out the manual for building the organism. The instructions are read out and implemented by complex molecular machinery with finely honed control mechanisms. Because the instructions are in code, they must first be decoded, or translated, by a mathematical procedure, before the cell can implement them. It took a few more years for scientists to crack the code and reveal the language of life, and several decades before DNA sequencing became commonplace.
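The decoding step can be caricatured in a few lines. The sketch below uses a handful of genuine assignments from the standard genetic code (real translation reads mRNA triplets, but it is conventional to write the DNA coding-strand equivalents); the input sequence itself is invented for illustration.

```python
# A miniature version of the genetic code: DNA triplets (codons) map to
# amino acids. Only five of the 64 real codon assignments are shown.
CODON_TABLE = {
    "ATG": "Met",   # methionine; also the "start" signal
    "GGC": "Gly",   # glycine
    "AAA": "Lys",   # lysine
    "TGG": "Trp",   # tryptophan
    "TAA": "STOP",  # one of the three stop codons
}

def translate(dna: str) -> list:
    """Read the sequence three letters at a time until a stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino_acid = CODON_TABLE.get(dna[i:i + 3], "?")
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

print(translate("ATGGGCAAATGGTAA"))   # ['Met', 'Gly', 'Lys', 'Trp']
```

The cell’s ribosome performs essentially this table lookup, but with molecular machinery rather than a dictionary.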

The rapid progress made in molecular biology in the 1950s coincided with major advances in computing, and it was soon clear that the two subjects were intertwined. DNA, which serves as a database storage facility, is like the hard drive on a computer. The instructions etched into DNA and the associated readout and translation machinery resemble computer operating systems and software.

It turns out that the analogy goes far deeper. Human DNA codes for about 20,000 genes, but only a fraction of them are “expressed” at any one time. By expressed, I mean they get read out, causing a specific protein to be manufactured. Put simply, your genes are all there inside you, but they may be either “on” or “off” depending on circumstances. Genes are often linked via chemical messengers because a gene that codes for one protein may serve to switch others on or off. In this way, genes can form networks, sometimes of great complexity. It is the networks, rather than individual genes, that carry out the lion’s share of regulatory and control functions. In this respect, biology closely resembles electronics. Biologists routinely refer to the “wiring diagram” of gene networks, and have discovered that some arrangements form modules that can behave like the logic gates in a computer; by wiring together many such gates, cells are able to carry out complex computations.
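The logic-gate behaviour of gene modules can be sketched in code. The gene names (A, B, C, D), wiring and on/off abstraction below are invented for illustration; real gene regulation is graded and noisy, but the Boolean caricature is the one biologists use when drawing “wiring diagrams”.

```python
# A toy gene-regulatory network. Gene C behaves as an AND gate: it is
# expressed only when both transcription factors A and B are present.
def gene_c_expressed(a_present: bool, b_present: bool) -> bool:
    """Gene C needs both activator A and activator B at its promoter."""
    return a_present and b_present

def gene_d_expressed(c_present: bool) -> bool:
    """Gene D is repressed by C's protein product -- a NOT gate."""
    return not c_present

# Wiring the two gates together lets the cell compute: D ends up on
# only when A or B is missing (a NAND of the original inputs).
for a in (False, True):
    for b in (False, True):
        c = gene_c_expressed(a, b)
        print(f"A={a} B={b} -> C={c} D={gene_d_expressed(c)}")
```

Chaining a few such modules gives the cell the same building blocks from which all digital computation is constructed.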

Why is life seemingly so invested in the computational process? The answer is that most organisms live in an unpredictable and fluctuating environment, and the ability of cells to garner information from their surroundings, process it and compute an optimal response confers an obvious survival advantage. And it is not just individual cells that process information. Cells can signal each other with both chemical messengers and physical forces, enabling them to cooperate. This is particularly striking in multi-celled life forms, in which pathways of information flow pervade the entire organism. Nor does it stop there. Signals can also be exchanged between organisms, for example, when ants or bees engage in collective decision-making while choosing a nest, or when flocking birds coordinate their flight. Even ecosystems possess elaborate networks of information flow. Earth’s biosphere is the original World Wide Web.

Today, biologists routinely frame their descriptions of life in informational terms. Concepts such as the genetic code, gene sequence transcription and translation, gene editing, signalling molecules, regulation and control, and logic functions all stem directly from the world of computing and information processing. Familiar though that may be, it is not at all the language used by physicists and chemists. Ask them “What is life?” and you are likely to be told about molecular shapes and binding energies, reaction rates, intermolecular forces and heat production.

The mismatch of these two descriptive fields is stark, but because physics and biology are thoroughly siloed it is rarely problematic. When it comes to the origin of life, however, no explanation can be forthcoming without a conceptual thread that describes how physics and chemistry somehow turned into biology.

How on earth can a haphazard mishmash of chemicals spontaneously organise itself into a system that stores digital information and processes it using a mathematical code? Barring a stupendously improbable freak accident, there are only two possible answers. The first is some sort of divine intervention – intelligent design. The second is a fundamentally new type of organising principle at work in complex systems.

When Schrödinger wrote his book he addressed the question of whether life, with all its remarkable properties, could ever be explained by known physics. He boldly left open the possibility that new laws of nature may be required. His suggestion has always been regarded as deeply heretical: the standard view is that known physics can explain everything. In my opinion that is just wishful thinking, a type of promissory reductionism. In truth, standard physics and chemistry have spectacularly failed to explain life’s origin. What little progress has been made in synthesising some small chemical components of life has always required an intelligent designer (aka a clever scientist) and a fancy lab. More seriously, nobody has a clue about how this freshly minted organic hardware can create its own software. How can molecules write code?

If there are indeed hitherto undiscovered “laws of life” connecting biology and physics, information and matter, software and hardware, then this missing link needs to assign to information some sort of causal role or traction over physical states, in order to make a difference. But information is an abstract concept that derives from the realm of human discourse. Can it also be a physical quantity? That is, can information carry clout when it comes to material objects? It turns out that the answer is yes.

Enter the demon

Viewed through the lens of history, the first clue to how information might enter the laws of physics came more than 150 years ago, with a conundrum posed by the Scottish physicist James Clerk Maxwell. A giant among physicists, Maxwell unified the laws of electricity and magnetism, showed that light is an electromagnetic wave and predicted the existence of radio waves.

Maxwell also made seminal contributions to the theory of heat, and it was on this subject that he threw a grenade into the heart of physics. In a letter to a friend penned in 1867, Maxwell presented the germ of an idea for how to conjure order out of chaos, seemingly for free. It was cast in terms of a “thought experiment” rather than a practical proposition, designed to illuminate one of the fundamental laws of the universe. That law is known as the second law of thermodynamics, and it defines the arrow of time so evident in daily life. It has many forms, but the one that Maxwell focused on is the flow of heat, which (according to the law) is invariably from hot to cold. We never encounter, for example, a bath of tepid water spontaneously freezing at one end and boiling at the other. Heat always evens itself out.

But that’s on an everyday scale. What would happen, mused Maxwell, if we could view the individual molecules of matter, rushing about chaotically, and tried to manipulate them? He envisaged a microscopic being, soon to be dubbed Maxwell’s demon, inside a sealed box of gas, able to not only perceive molecules but also sort them into fast and slow categories. To achieve this separation, Maxwell equipped his demon with a screen down the middle of the box and a little shutter to open or close a tiny hole in the screen.

The thought experiment starts with the gas (oxygen, say) on either side of the screen being in the same state; that is, having the same temperature, pressure and density everywhere. Such a featureless, uniform state is referred to as “thermodynamic equilibrium”, and if the system is left undisturbed it will persist forever. Crucially, the molecules of gas don’t all move at the same speed. Some molecules may move twice as fast as the average, some half as fast etc. While the average speed remains fixed if the temperature is constant, any given molecule will always be changing its speed and direction as it collides randomly with others.

Now comes the clever bit. A nimble demon could open the shutter when a fast molecule approached from the right, allowing it to go through the hole into the left-hand chamber. Similarly, the demon could permit slow molecules to traverse in the other direction. Over time the average speed of those on the left would grow greater than those on the right, which implies the gas on the left will become hotter than that on the right. With demonic interference, equilibrium could become disequilibrium. Heat would have been transferred from cold to hot in violation of the cherished second law, a prospect as shocking as water flowing uphill or a broken egg reassembling itself. The demon could then exploit the established temperature gradient by running an engine to do useful work such as lifting a weight. In effect, the demon would have created a type of perpetual motion machine by converting random molecular chaos into orderly directed activity – an inventor’s dream.
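The demon’s sorting trick is easy to caricature in a few lines of code. The simulation below is a toy with invented numbers – 10,000 “molecules” with exponentially distributed speeds, and a demon that simply compares each candidate against the mean – but it shows the essential point: selective gating alone drives the two chambers out of equilibrium.

```python
import random

random.seed(1)

# Both chambers start in the same state: equal numbers of molecules
# drawn from the same speed distribution (mean speed 1.0).
mean_speed = 1.0
left = [random.expovariate(1 / mean_speed) for _ in range(5000)]
right = [random.expovariate(1 / mean_speed) for _ in range(5000)]

for _ in range(20000):
    # A random molecule approaches the shutter from the right;
    # the demon lets it through only if it is fast.
    i = random.randrange(len(right))
    if right[i] > mean_speed:
        left.append(right.pop(i))
    # A random molecule approaches from the left;
    # the demon lets it through only if it is slow.
    j = random.randrange(len(left))
    if left[j] <= mean_speed:
        right.append(left.pop(j))

def avg(xs):
    return sum(xs) / len(xs)

# Average speed (a stand-in for temperature) is now higher on the left:
# the demon has manufactured a temperature gradient by sorting alone.
print(f"left {avg(left):.2f}  right {avg(right):.2f}")
```

Of course, a real demon would have to pay for the measurements – which is exactly the loophole discussed below.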

The secret of the demon’s prowess lies with its ability to garner information about individual careering molecules and use it to cheat the second law. Now it’s wrong to say that heat never goes from cold to hot: my refrigerator does it all the time, extracting heat from the inside and pumping it out into the warm kitchen. But it comes at a price – in this case my electricity bills. Obviously the demon isn’t wired to the grid; it has no external power source. Instead, it has access to information. The conclusion is obvious: information can behave as a type of fuel. Information can do work; it can run an engine.

So far I have been using the word “information” in an informal everyday sense. But if it is deemed to have actual physical effects, we need a way to quantify it. That step was taken in the late 1940s by an engineer named Claude Shannon. His brief (from the US Army) was to figure out how to manage information so as to optimise data transmission down a noisy telephone line or radio link. Shannon’s definition of information is disarmingly simple. If I toss a fair coin, I know there is a 50 per cent chance it will come down heads. If I don’t look, I am completely uncertain of the outcome, but once I obtain the information – heads or tails – that uncertainty is reduced to zero. Shannon called this reduction in uncertainty one “binary digit of information”, or bit for short. (Bits and bytes are now familiar terms in computer jargon – a byte is 8 bits.)

Armed with that quantification, I can now say how much information is worth, energy-wise, assuming a 100 per cent efficient demon. The answer, at room temperature, is about 3 × 10⁻²¹ joules per bit. Not much, in fact. To put that number into context, it would take about a hundred trillion trillion bits to boil a kettle. Obviously, nobody will be making an information-powered car any time soon, but the importance of Maxwell’s demon is the point of principle: information as a quantity, rather than an abstraction, enters the laws of physics in a precise way.
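The arithmetic behind these figures is easy to check. The sketch below assumes room temperature (300 K) and a one-litre kettle heated from 20°C to boiling; the energy per bit is the familiar k·T·ln 2 bound.

```python
import math

# Maximum work extractable per bit of information at room temperature:
# Boltzmann's constant times temperature times ln 2.
k_B = 1.380649e-23          # Boltzmann constant, joules per kelvin
T = 300.0                   # room temperature, kelvin
energy_per_bit = k_B * T * math.log(2)
print(f"{energy_per_bit:.1e} J per bit")    # roughly 3 x 10^-21 joules

# Boiling a kettle: heat 1 litre (1 kg) of water from 20 C to 100 C.
joules_for_kettle = 1.0 * 4186 * 80         # mass x specific heat x delta-T
bits_needed = joules_for_kettle / energy_per_bit
print(f"{bits_needed:.1e} bits")            # about 10^26 -- a hundred trillion trillion
```

A hundred trillion trillion is 10²⁶, which is indeed what the division gives.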

What Maxwell surely never imagined is that, 150 years after he wrote his letter, a real demon would be made in the city of his birth. In 2007, David Leigh’s research group at Edinburgh University built a tiny information engine out of customised molecules. The whole thing was just a few nanometres in size, and consisted of a ring that can slide back and forth on a rod with stoppers at the end (like a dumbbell). Natural thermal agitations caused the ring to jiggle about along the rod, but the side-to-side movement could be blocked by another molecule operating a bit like Maxwell’s shutter mechanism. The shutter could be controlled with a laser depending on the information gleaned about the speed of the ring. The researchers were able to confirm that “information known to a gate-operating demon” can indeed serve as a fuel.

Designing demons, or information engines, has become something of a cottage industry among nanotechnologists. One device, manufactured in South Korea, is able to achieve an astonishing 98.5 per cent efficiency converting information into work. My favourite is an information-powered refrigerator made at a research lab in Finland.

I should explain that information engines are not quite the free lunch they may appear to be. To achieve effects on an everyday scale the demon has to keep on going, repeating its antics and accumulating more and more energy available for use. But that implies accumulating and processing more and more information, until the poor demon’s brain gets clogged. To reset everything the demon has to be brainwashed, and that step always costs at least as much energy as the demon has sequestered. So don’t expect a big impact on kitchen appliances. But on a nanoscale it is clear that garnering and exploiting information about the molecular environment can be very important. And it is on a nanoscale that life works its special brand of magic.

The demon in the machine

It is often said that whatever human designers come up with, nature got there first. And it’s true that biology discovered all the paraphernalia of modern engineering – pumps, ratchets, rotors, pulleys etc. – long before we manufactured them. The same is true of demons. Life is adept at nanotechnology: living cells are replete with demonic molecules playing the margins of thermodynamics and gaining an advantage. For example, for DNA replication, a little machine crawls along the strand and builds a copy with almost perfect energy efficiency. A two-legged molecule called kinesin delivers cargo by walking along fibres inside cells, all the while buffeted by a hail of thermally agitated water molecules. In the face of this molecular storm, kinesin is able to convert the random bombardment into directed one-way motion with minimal expenditure of energy. In the brain, Maxwell demons sense the approach of neural signals and use this information to open and close shutters in the walls of axons – the “wires” by which neurons signal each other – allowing sodium and potassium ions to enter or leave. In this way, they send the pulse on its way. The thermodynamic efficiency is so great that the human brain, which has the processing power of a megawatt supercomputer, manages to operate on an energy budget equivalent to that of a small light bulb.

Maxwell’s demon cracks open a door into a world that connects matter and information, but to fully explain life, we need to go beyond merely saving on the energy bills. The secret of life lies with its ability to couple patterns of information to patterns of chemical processes in a manner that displays some form of teleology, the tendency to be drawn towards a future goal. But there is a profound conceptual obstacle lurking here. Unlike physical quantities such as mass and electric charge, information isn’t something that can be pinned down and definitively located on a particle of matter. Biological information, such as the instructions encoded in a gene, is meaningful only in the context of an entire cell. Pick a particular “letter” on a segment of DNA: it is just a molecule. It doesn’t come with a label indicating it is “charged with biologically relevant information”; you can’t tell by looking whether it is a critical letter in the gene for a vital protein, or just any old molecule. If this letter were located in a so-called junk sequence (for example, a non-functional remnant of an abandoned gene) it would look exactly the same. You can discern which sequences of letters represent coded instructions only by considering the whole system of translation and protein production. In a nutshell, biologically relevant information means information with functionality. And functionality makes sense only in a global context.

If there is indeed a new type of organising principle at work in biology – a life principle – then the laws of life differ fundamentally from traditional laws of physics. What might such novel laws look like? Here is an analogy. The game of chess operates according to fixed laws: each piece can move in accordance with designated rules that refer solely to neighbouring squares on the board. Although there is a vast number of possible games, and hence of possible downstream board configurations, it is still the case that most configurations are not possible because the rules permit only certain pathways of play. If you place the pieces randomly on the board and ask whether such a state of play could ever arise, the answer is very likely to be no. There would simply be no way to get there from the opening configuration by sticking to the standard rules.

Now imagine an extended game, call it chess-plus, where the rules can change according to the overall state of play. For example, “If white is two or more pawns ahead, then black may move pawns backwards as well as forwards.” (That’s a daft rule, but I’m trying to make a point.) The new rule refers to the complete context of the game. As a result, patterns may emerge on the board that would be not merely unlikely but also strictly impossible by the standard rules of chess. So by introducing system-level, or contextual, rules, new pathways to complexity are opened up.
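The deliberately daft pawn rule above can be made concrete. In the sketch below the board model is invented for illustration (black pawns advance towards higher row numbers); the point is only that the move set depends on a global property of the position, not just the local square.

```python
def legal_black_pawn_rows(row, white_pawns, black_pawns, board_size=8):
    """Return the rows a black pawn on `row` may move to in "chess-plus".

    Fixed, local rule: pawns advance one square (here, row + 1).
    Contextual rule from the essay: if white is two or more pawns
    ahead, the global state of play also unlocks the backward move.
    """
    moves = []
    if row + 1 < board_size:
        moves.append(row + 1)               # the ordinary forward step
    if white_pawns - black_pawns >= 2 and row - 1 >= 0:
        moves.append(row - 1)               # the system-level rule
    return moves

# With material even, only the forward move exists...
print(legal_black_pawn_rows(3, white_pawns=8, black_pawns=8))   # [4]
# ...but once white is two pawns up, a position strictly unreachable
# under the fixed rules (a pawn that has retreated) becomes legal.
print(legal_black_pawn_rows(3, white_pawns=8, black_pawns=6))   # [4, 2]
```

No sequence of moves under the fixed rules can ever produce a retreated pawn; the contextual rule opens pathways to configurations that were previously impossible, which is exactly the claim being made about system-level laws.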

I think something like this is going on in living systems: the “laws of life” cannot be stated in terms of individual molecules; they can only be stated at a systems level. Yet with such laws, new forms of complexity can emerge, including, I submit, the pathway from non-life to life. My colleagues at Arizona State University have run simplified computer models for a system analogous to chess-plus, and they confirm that system-level rules can indeed lead to novel patterns of complexity that evolve in an open-ended way, as we hoped.

The next frontier of science

Whatever the merits of computer models, there is no substitute for experiments. Unfortunately life’s origin is lost in the mists of time, while experiments to cook up life in the lab have barely got to first base. The burgeoning field of synthetic biology, in which existing life is re-engineered at the molecular level, might offer a glimpse of something novel at work.

There is another scientific field opening up, one that could unveil something fundamentally new: the application to biology of quantum mechanics, Schrödinger’s own brainchild. The essence of quantum mechanics is uncertainty, as encapsulated in Heisenberg’s Uncertainty Principle, which says it’s impossible to know all the properties of a particle at the same moment. For example, an atom may have a well-defined speed but a fuzzy position. Or the other way around. You can never know both the speed and position. When a measurement is made, uncertainty is replaced by certainty; one might choose to measure the position of an atom and determine it to be at a specific location, for instance. The reduction in uncertainty on measurement amounts to the acquisition of information, as it does with tossed coins. But in quantum systems the uncertainty is not just the result of human ignorance: it is inherent, a basic feature of nature. Thus information lies at the very heart of quantum physics.

In recent years, scientists have found tantalising hints that life is exploiting quantum effects in some specific cases, including photosynthesis and bird navigation. The controversial subject of quantum biology is attracting much attention. Most intriguing from my point of view are the experiments of Gábor Vattay of Eötvös Loránd University, Budapest, who has found evidence that many key molecules used by life have unusual finely tuned quantum properties. One explanation is that evolution has selected these properties for reasons of chemical efficiency. But a more intriguing possibility is that the special characteristics of these molecules relate to the transfer and organisation of information – a hidden quantum code – and that it is at the level of these large organic molecules that the new principles I have been advocating are manifested.

The next frontier of science lies at the intersection of nanotechnology, quantum physics, chemistry and biology. It is here, where physics meets life, that new phenomena will be discovered, and Schrödinger’s 75-year-old question – What Is Life? – finally answered.

Paul Davies

Paul Davies is a physicist and astrobiologist at Arizona State University, where he is Regents’ Professor and Director of the Beyond Center for Fundamental Concepts in Science. His latest book is The Demon in the Machine.
