The Final Frontier: Are We Reaching the Limits of Science?

Ten years after the publication of The End of Science, John Horgan says the limits of scientific inquiry are more visible than ever.

By John Horgan
Sep 22, 2006

Ten years ago, science journalist John Horgan published a provocative book suggesting that scientists had solved most of the universe's major mysteries. The outcry was loud and immediate. Given the tremendous advances since then, Discover invited Horgan to revisit his argument and seek out the greatest advances yet to come.

One of my most memorable moments as a journalist occurred in December 1996, when I attended the Nobel Prize festivities in Stockholm. During a 1,300-person white-tie banquet presided over by Sweden's king and queen, David Lee of Cornell University, who shared that year's physics prize, decried the "doomsayers" claiming that science is ending. Reports of science's death "are greatly exaggerated," he said.

Lee was alluding to my book, The End of Science, released earlier that year. In it, I made the case that science—especially pure science, the grand quest to understand the universe and our place in it—might be reaching a cul-de-sac, yielding "no more great revelations or revolutions, but only incremental, diminishing returns." More than a dozen Nobel laureates denounced this proposition, mostly in the media but some to my face, as did the White House science advisor, the British science minister, the head of the Human Genome Project, and the editors in chief of the journals Science and Nature.

Over the past decade scientists have announced countless discoveries that seem to undercut my thesis: cloned mammals (starting with Dolly the sheep), a detailed map of the human genome, a computer that can beat the world champion in chess, brain chips that let paralyzed people control computers purely by thought, glimpses of planets around other stars, and detailed measurements of the afterglow of the Big Bang. Yet within these successes there are nagging hints that most of what lies ahead involves filling in the blanks of today's big scientific concepts, not uncovering totally new ones.

Even Lee acknowledges the challenge. "Fundamental discoveries are becoming more and more expensive and more difficult to achieve," he says. His own Nobel helps make the point. The Russian physicist Pyotr Kapitsa discovered the strange phenomenon known as superfluidity in liquid helium in 1938. Lee and two colleagues merely extended that work, showing that superfluidity also occurs in a helium isotope known as helium 3. In 2003, yet another Nobel Prize was awarded for investigations of superfluidity. Talk about anticlimactic!

Optimists insist that revolutionary discoveries surely lie just around the corner. Perhaps the big advance will spring from physicists' quest for a theory of everything; from studies of "emergent" phenomena with many moving parts, such as ecologies and economies; from advances in computers and mathematics; from nanotechnology, biotechnology, and other applied sciences; or from investigations of how brains make minds. "I can see problems ahead of all sizes, and clearly many of them are soluble," says physicist and Nobel laureate Philip Anderson (who, in 1999, coined the term Horganism to describe "the belief that the end of science . . . is at hand"). On the flip side, some skeptics contend that science can never end because all knowledge is provisional and subject to change.

For the 10th anniversary of The End of Science, I wanted to address these new objections. What I have found is that the limits of scientific inquiry are more visible than ever. My goal, now as then, is not to demean valuable ongoing research but to challenge excessive faith in scientific progress. Scientists pursuing truth need a certain degree of faith in the ultimate knowability of the world; without it, they would not have come so far so fast. But those who deny any evidence that challenges their faith violate the scientific spirit. They also play into the hands of those who claim that "science itself is merely another kind of religion," as physicist Lawrence Krauss of Case Western Reserve University warns.

Argument: Predictions that science is ending are old hat, and they have always proved wrong. The most common response to The End of Science is the "that's what they thought then" claim. It goes like this: At the end of the 19th century, physicists thought they knew everything just before relativity and quantum mechanics blew physics wide open. Another popular anecdote involves a 19th-century U.S. patent official who quit his job because he thought "everything that can be invented has been invented." In fact, the patent-official story is purely apocryphal, and the description of 19th-century physicists as smug know-it-alls is greatly exaggerated. Moreover, even if scientists had foolishly predicted science's demise in the past, that does not mean all such predictions are equally foolish.

The "that's what they thought then" response implies that because science advanced rapidly over the past century or so, it must continue to do so, possibly forever. This is faulty inductive reasoning. A broader view of history suggests that the modern era of explosive progress is an anomaly—the product of a unique convergence of social, economic, and political factors—that must eventually end. Science itself tells us that there are limits to knowledge. Relativity theory prohibits travel or communication faster than light. Quantum mechanics and chaos theory constrain the precision with which we can make predictions. Evolutionary biology reminds us that we are animals, shaped by natural selection not for discovering deep truths of nature but for breeding.

The greatest barrier to future progress in science is its past success. Scientific discovery resembles the exploration of the Earth. The more we know about our planet, the less there is to explore. We have mapped out all the continents, oceans, mountain ranges, and rivers. Every now and then we stumble upon a new species of lemur in an obscure jungle or an exotic bacterium in a deep-sea vent, but at this point we are unlikely to discover something truly astonishing, like dinosaurs dwelling in a secluded cavern. In the same way, scientists are unlikely to discover anything surpassing the Big Bang, quantum mechanics, relativity, natural selection, or genetics.

Just over a century ago, the American historian Henry Adams observed that science accelerates through a positive feedback effect: Knowledge begets more knowledge. This acceleration principle has an intriguing corollary. If science has limits, then it might be moving at maximum speed just before it hits the wall. I am not the only science journalist who suspects we have entered this endgame. "The questions scientists are tackling now are a lot narrower than those that were being asked 100 years ago," Michael Lemonick wrote in Time magazine recently, because "we've already made most of the fundamental discoveries."
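
To make the corollary concrete, here is a minimal toy model in Python (my own illustrative sketch, not anything Adams proposed): knowledge grows in proportion to itself, as the acceleration principle assumes, but only up to a hypothetical ceiling of discoverable knowledge. In such a model the gains per period keep growing right up until the ceiling cuts them off, so the fastest progress comes just before the wall.

```python
# Toy model of the acceleration corollary (an illustrative sketch, not a
# claim from the article): knowledge begets knowledge until a hard ceiling
# of discoverable knowledge is reached, so the largest gains arrive in the
# periods immediately before the wall.

CEILING = 1000.0   # hypothetical total stock of discoverable knowledge
GROWTH = 0.5       # knowledge grows 50 percent per period while unconstrained

knowledge, period = 1.0, 0
while knowledge < CEILING:
    gain = min(GROWTH * knowledge, CEILING - knowledge)  # positive feedback, capped
    knowledge += gain
    period += 1
    print(f"period {period:2d}: stock={knowledge:7.1f}  gained={gain:6.1f}")
```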

Argument: Science is still confronting huge remaining mysteries, like where the universe came from. Other reporters like to point out that there is "No End of Mysteries," as a cover story in U.S. News & World Report put it. But some mysteries are probably unsolvable. The biggest mystery of all is the one cited by Stephen Hawking in A Brief History of Time: Why is there something rather than nothing? More specifically, what triggered the Big Bang, and why did the universe take this particular form rather than some other form that might not have allowed our existence?

Scientists' attempts to solve these mysteries often take the form of what I call ironic science—unconfirmable speculation more akin to philosophy or literature than genuine science. (The science is ironic in the sense that it should not be considered a literal statement of fact.) A prime example of this style of thinking is the anthropic principle, which holds that the universe must have the form we observe because otherwise we would not be here to observe it. The anthropic principle, championed by leading physicists such as Leonard Susskind of Stanford University, is cosmology's version of creationism.

Another example of ironic science is string theory, which for more than 20 years has been the leading contender for a "theory of everything" that explains all of nature's forces. The theory's concepts and jargon have evolved over the past decade, with two-dimensional membranes replacing one-dimensional strings, but the theory comes in so many versions that it predicts virtually everything—and hence nothing at all. Critics call this the "Alice's restaurant problem," a reference to a folk song with the refrain, "You can get anything you want at Alice's restaurant." This problem leads Columbia mathematician Peter Woit to call string theory "not even wrong" in his influential blog of the same title, which refers to a famous put-down by Wolfgang Pauli.

Although Woit echoes the criticisms of string theory I made in The End of Science, he still hopes that new mathematical techniques may rejuvenate physics. I have my doubts. String theory already represents an attempt to understand nature through mathematical argumentation rather than empirical tests. To break out of its current impasse, physics desperately needs not new mathematics but new empirical findings—like the discovery in the late 1990s that the expansion of the universe is accelerating. This is by far the most exciting advance in physics and cosmology in the last decade, but it has not led to any theoretical breakthrough. Meanwhile, the public has become increasingly reluctant to pay for experiments that can push back the frontier of physics. The Large Hadron Collider will be the world's most powerful particle accelerator when it goes online next year, and yet it is many orders of magnitude too weak to probe directly the microrealm where strings supposedly dwell.

Argument: Science can't ever come to an end because theories, by their very nature, keep being overturned. Many philosophers—and a surprising number of scientists—accept this line, which essentially means that all science is ironic. They adhere to the postmodern position that we do not discover truth so much as we invent it; all our knowledge is therefore provisional and subject to change. This view can be traced back to two influential philosophers: Karl Popper, who held that theories can never be proved but only disproved, or falsified, and Thomas Kuhn, who contended that theories are not true statements about reality but only temporarily convenient suppositions, or paradigms.

If all our scientific knowledge were really this flimsy and provisional, then of course science could continue forever, with theories changing as often as fads in clothing or music. But the postmodern stance is clearly wrong. We have not invented atoms, elements, gravity, evolution, the double helix, viruses, and galaxies; we have discovered them, just as we discovered that Earth is round and not flat.

When I spoke to him more than 10 years ago, philosopher Colin McGinn, now at the University of Miami, rejected the view that all of science is provisional, saying, "Some of it is, but some of it isn't!" He also suggested that, given the constraints of human cognition, science will eventually reach its limits and that, at that point, "religion might start to appeal to people again." Today, McGinn stands by his assertion that science "must in principle be completable" but adds, "I don't, however, think that people will or should turn to religion if science comes to an end." Current events might suggest otherwise.

Argument: Reductionist science may be over, but a new kind of emergent science is just beginning. In his new book, A Different Universe, Robert Laughlin, a physicist and Nobel laureate at Stanford, concedes that science may in some ways have reached the "end of reductionism," the approach that identifies the basic components and forces underpinning the physical realm. Nevertheless, he insists that scientists can discover profound new laws by investigating complex, emergent phenomena, which cannot be understood in terms of their individual components.

Physicist and software mogul Stephen Wolfram advances a similar argument from a more technological angle. He asserts that computer models called cellular automata represent the key to understanding all of nature's complexities, from quarks to economies. Wolfram found a wide audience for these ideas with his 1,200-page self-published opus A New Kind of Science, which he claims has been seen as "initiating a paradigm shift of historic importance in science, with new implications emerging at an increasing rate every year."

Actually, Wolfram and Laughlin are recycling ideas propounded in the 1980s and 1990s in the fields of chaos and complexity, which I regard as a single field—I call it chaoplexity. Chaoplexologists harp on the fact that simple rules, when followed by a computer, can generate extremely complicated patterns, which appear to vary randomly as a function of time or scale. In the same way, they argue, simple rules must underlie many apparently noisy, complicated aspects of nature.
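
The chaoplexologists' central claim about simple rules is easy to demonstrate. Below is a minimal Python sketch of one of Wolfram's elementary cellular automata, Rule 30 (the rule number, grid width, and step count are my illustrative choices): a lookup table with only eight entries, applied over and over to a row of cells, produces a famously irregular, effectively random-looking pattern.

```python
# Minimal sketch: Wolfram's Rule 30 elementary cellular automaton.
# Each cell's next state depends only on itself and its two neighbors,
# yet the pattern that unfolds from a single "on" cell looks random.

RULE = 30               # illustrative choice; any rule number from 0 to 255 works
WIDTH, STEPS = 79, 40   # illustrative grid width and number of generations

# Encode the rule as a lookup table: (left, center, right) -> new cell state.
table = {(a, b, c): (RULE >> (a * 4 + b * 2 + c)) & 1
         for a in (0, 1) for b in (0, 1) for c in (0, 1)}

row = [0] * WIDTH
row[WIDTH // 2] = 1     # start from a single "on" cell in the middle

for _ in range(STEPS):
    print("".join("#" if cell else " " for cell in row))
    row = [table[(row[(i - 1) % WIDTH], row[i], row[(i + 1) % WIDTH])]
           for i in range(WIDTH)]
```

Whether such toy demonstrations license Wolfram's sweeping claims about quarks and economies is, of course, exactly what is in dispute.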

So far, chaoplexologists have failed to find any profound new scientific laws. I recently asked Philip Anderson, a veteran of this field, to list major new developments. In response he cited work on self-organized criticality, a mathematical model that dates back almost two decades and that has proved to have limited applications. One reason for chaoplexity's lack of progress may be the notorious butterfly effect, the notion that tiny changes in initial conditions can eventually yield huge consequences in a chaotic system; the classic example is that the beating of a butterfly's wings could eventually trigger the formation of a tornado. The butterfly effect limits both prediction and retrodiction, and hence explanation, because specific events cannot be ascribed to specific causes with complete certainty.
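
The butterfly effect itself can be seen in a few lines of code. The following minimal Python sketch uses the logistic map, a standard textbook chaotic system (the parameter value and starting points are my illustrative choices, not anything from the article): two trajectories that begin one part in a million apart become completely uncorrelated within a few dozen steps, which is why detailed prediction, and equally retrodiction, breaks down.

```python
# Minimal sketch of sensitive dependence on initial conditions, using the
# logistic map x -> r * x * (1 - x) in its chaotic regime (r = 4).
# Two starting values differing by one part in a million soon diverge completely.

r = 4.0
x1, x2 = 0.300000, 0.300001   # illustrative initial conditions, 1e-6 apart

for step in range(1, 41):
    x1 = r * x1 * (1 - x1)
    x2 = r * x2 * (1 - x2)
    if step % 10 == 0:
        print(f"step {step:2d}: x1={x1:.6f}  x2={x2:.6f}  gap={abs(x1 - x2):.6f}")
```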

Argument: Applied physics could still deliver revolutionary breakthroughs, like fusion energy. Some pundits insist that, although the quest to discover nature's basic laws might have ended, we are now embarking on a thrilling era of gaining control over these laws. "Every time I open the newspaper or search the Web," the physicist Michio Kaku of the City University of New York says, "I find more evidence that the foundations of science are largely over, and we are entering a new era of applications." Saying that science has ended because we understand nature's basic laws, Kaku contends, is like saying that chess ends once you understand the rules.

I see some validity to this point; that is why The End of Science focused on pure rather than applied research. But I disagree with techno-evangelists such as Kaku, Eric Drexler, and Ray Kurzweil that nanotechnology and other applied fields will soon allow us to manipulate the laws of nature in ways limited only by our imaginations. The history of science shows that basic knowledge does not always translate into the desired applications.

Nuclear fusion—a long-sought source of near-limitless energy and one of the key applications foreseen by Kaku—offers a prime example. In the 1930s, Hans Bethe and a handful of other physicists elucidated the basic rules governing fusion, the process that makes stars shine and thermonuclear bombs explode. Over the past 50 years, the United States alone has spent nearly $20 billion trying to control fusion in order to build a viable power plant. During that time, physicists repeatedly touted fusion as the energy source of the future.

The United States and other nations just agreed to invest another $13 billion to build the International Thermonuclear Experimental Reactor in France. Even so, optimists acknowledge that fusion energy still faces formidable technical, economic, and political barriers. William Parkins, a nuclear physicist and veteran of the Manhattan Project, recently advocated abandoning fusion-energy research, which he called "as expensive as it is discouraging." If there are breakthroughs here, the current generation probably will not live to see them.

Argument: We are on the verge of a breakthrough in applied biology that will allow people to live essentially forever. The potential applications of biology are certainly more exciting these days than those of physics. The completion of the Human Genome Project and recent advances in cloning, stem cells, and other fields have emboldened some scientists to predict that we will soon conquer not only disease but aging itself. "The first person to live to 1,000 may have been born by 1945," declares computer scientist-turned-gerontologist Aubrey de Grey, a leader in the immortality movement (who was born in 1963).

Many of de Grey's colleagues beg to differ, however. His view "commands no respect at all within the informed scientific community," 28 senescence researchers declared in a 2005 journal article. Indeed, evolutionary biologists warn that immortality may be impossible to achieve because natural selection designed us to live only long enough to reproduce and raise our children. As a consequence, senescence does not result from any single cause or even a suite of causes; it is woven inextricably into the fabric of our bodies. The track record of two fields of medical research—gene therapy and the war on cancer—should also give the immortalists pause.

In the early 1990s, the identification of specific genes underlying inherited disease—such as Huntington's chorea, early-onset breast cancer, and immune deficiency syndrome—inspired researchers to devise therapies to correct the genetic malformations. So far, scientists have carried out more than 350 clinical trials of gene therapy, and not one has been an unqualified success. One 18-year-old patient died in a trial in 1999, and a promising French trial of therapy for inherited immune deficiency was suspended last year after three patients developed leukemia, leading The Wall Street Journal to proclaim that "the field seems cursed."

The record of cancer treatment is also dismal. Since 1971, when President Richard Nixon declared a "war on cancer," the annual budget for the National Cancer Institute has increased from $250 million to $5 billion. Scientists have gained a much better understanding of the molecular and genetic underpinnings of cancer, but a cure looks as remote as ever. Cancer epidemiologist John Bailar of the University of Chicago points out that overall cancer mortality rates in the United States actually rose from 1971 until the early 1990s before declining slightly over the last decade, predominantly because of a decrease in the number of male smokers. No wonder, then, that experts like British gerontologist Tom Kirkwood call predictions of human immortality "nonsense."

Argument: Understanding the mind is still a huge, looming challenge. When I met the British biologist Lewis Wolpert in London in 1997, he declared that The End of Science was "absolutely appalling!" He was particularly upset by my critique of neuroscience. The field was just beginning, not ending, he insisted. He stalked away before I could tell him that I thought his objection was fair. I actually agree that neuroscience in many ways represents science's most dynamic frontier.

Over the past decade, membership in the Society for Neuroscience has surged by almost 50 percent to 37,500. Researchers are probing the brain with increasingly powerful tools, including superfast magnetic resonance imagers and microelectrodes that can detect the murmurs of individual brain cells. Nevertheless, the flood of data stemming from this research has failed so far to yield truly effective therapies for schizophrenia, depression, and other disorders, or a truly persuasive explanation of how brains make minds. "We're still in the tinkering stage, preparadigm and pretheoretical," says V.S. Ramachandran of the University of California at San Diego. "We're still at the same stage physics was in the 19th century."

The postmodern perspective applies all too well to fields that attempt to explain us to ourselves. Theories of the mind never really die; they just go in and out of fashion. One astonishingly persistent theory is psychoanalysis, which Sigmund Freud invented a century ago. "Freud . . . captivates us even now," Newsweek proclaimed just last March. Freud's ideas have persisted not because they have been scientifically confirmed but because a century's worth of research has not produced a paradigm powerful enough to render psychoanalysis obsolete once and for all. Freudians cannot point to unambiguous evidence of their paradigm's superiority, but neither can proponents of more modern paradigms, whether behaviorism, evolutionary psychology, or psychopharmacology.

Science's best hope for understanding the mind is to crack the neural code. Analogous to the software of a computer, the neural code is the set of rules or syntax that transforms the electrical pulses emitted by brain cells into perceptions, memories, and decisions. The neural code could yield insights into such ancient philosophical conundrums as the mind-body problem and the riddle of free will. In principle, a solution to the neural code could give us enormous power over our psyches, because we could monitor and manipulate brain cells with exquisite precision by speaking to them in their own language.

But Christof Koch of Caltech warns that the neural code may never be totally deciphered. After all, each brain is unique, and each brain keeps changing in response to new experiences, forming new synaptic connections between neurons and even—contrary to received wisdom a decade ago—growing new neurons. This mutability (or "plasticity," to use neuroscientists' preferred term) immensely complicates the search for a unified theory of the brain and mind. "It is very unlikely that the neural code will be anything as simple and as universal as the genetic code," Koch says.

Argument*: If you really believe science is over, why do you still write about it? Despite my ostensible pessimism, I keep writing about science, teach at a science-oriented school, and often encourage young people to become scientists. Why? First of all, I could simply be wrong—there, I've said it—that science will never again yield revelations as monumental as evolution or quantum mechanics. A team of neuroscientists may find an elegant solution to the neural code, or physicists may find a way to confirm the existence of extra dimensions.

In the realm of applied science, we may defeat aging with genetic engineering, boost our IQs with brain implants, or find a way to bypass Einstein's ban on faster-than-light travel. Although I doubt these goals are attainable, I would hate for my end-of-science prophecy to become self-fulfilling by discouraging further research. "Is there a limit to what we can ever know?" asks David Lee. "I think that this is a valid question that can only be answered by vigorously attempting to push back the frontiers."

Even if science does not achieve such monumental breakthroughs, it still offers young researchers many meaningful opportunities. In the realm of pure research, we are steadily gaining a better understanding of how galaxies form; how a single fertilized egg turns into a fruit fly or a congressman; how synaptic growth supports long-term memory. Researchers will surely also find better treatments for cancer, schizophrenia, AIDS, malaria, and other diseases; more effective methods of agricultural production; more benign sources of energy; more convenient contraception methods.

Most exciting to me, scientists might help find a solution to our most pressing problem, warfare. Many people today view warfare and militarism as inevitable outgrowths of human nature. My hope is that scientists will reject that fatalism and help us see warfare as a complex but solvable problem, like AIDS or global warming. War research—perhaps it should be called peace research—would seek ways to avoid conflict. The long-term goal would be to explore how humanity can make the transition toward permanent disarmament: the elimination of armies and the weapons they use. What could be a grander goal?

In the last century, scientists split the atom, cracked the genetic code, landed spacecraft on the moon and Mars. I have faith—yes, that word again—that scientists could help solve the problem of war. The only question is how, and how soon. Now that would be an ending worth celebrating.


*Note: John Horgan "whacked" this paragraph for being wishy-washy in Horganism, his new blog. 
