Rise of the Machines

As computers get smarter, experts examine the potential implications

In February, an IBM supercomputer, Watson, crushed two skilled competitors in a three-day human-versus-machine matchup on the popular TV quiz show Jeopardy! The battle was great television, featuring feisty human Davids against a mechanized Goliath with a “brain” crammed with almost unlimited facts.

Watson might seem like the stuff of science fiction, but beyond the hype, its victory raises fascinating and, perhaps, uncomfortable questions about the implications of supercomputing and artificial intelligence.

Paul Humphreys watched the Jeopardy! contest. A professor in UVA’s philosophy department specializing in the philosophy of science, metaphysics and epistemology, he thought Watson was an impressive technological feat. However, who did Humphreys root for? “The people,” he says. “After all, it’s hard to cheer for a machine.”

Watson is only one of the latest generation of supercomputers. Another, IBM’s Mira, can run 10 quadrillion calculations a second. IBM explains Mira’s capabilities in human terms: “If every man, woman and child in the United States performed one calculation each second, it would take them almost a year to do as many calculations as Mira will do in one second.”
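IBM’s comparison is simple arithmetic, and it checks out. A quick back-of-the-envelope sketch in Python, assuming a U.S. population of roughly 310 million (an assumption; the article gives no figure):

```python
# Back-of-the-envelope check of IBM's claim, assuming a U.S. population
# of about 310 million (an assumption; the article gives no figure).
mira_calcs_per_second = 10 ** 16  # 10 quadrillion calculations per second
us_population = 310_000_000       # one calculation per person per second

seconds_needed = mira_calcs_per_second / us_population
days_needed = seconds_needed / (60 * 60 * 24)

print(f"{days_needed:.0f} days")  # about 373 days, i.e. almost a year
```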

Are we ready for machines that can outthink us?

Award-winning—and controversial—inventor Ray Kurzweil believes that artificial intelligence will soon surpass us. Dubbed by Fortune magazine “the smartest or the nuttiest futurist on earth,” Kurzweil predicts that with the ever-increasing speed of computer development, human intellectual supremacy will last only another 16 years. After that, he argues, technology will become so complex that people won’t be able to understand it, and advances in computer technology will be made by computers themselves.

Like Kurzweil, artificial intelligence pioneer Marvin Minsky of MIT says that machines ultimately will be able to do anything a person can. Indeed, he goes further and writes: “Eventually, we will entirely replace our brains—using nanotechnology. Once delivered from the limitations of biology, we will be able to decide the length of our lives—with the option of immortality—and choose among other, unimagined capabilities as well.”

Has Watson moved us closer to the brave new world foreseen by Kurzweil and Minsky?

“Many of our students believe that a technological singularity is approaching—a moment when an artificial intelligence will emerge that is superior to humanity’s,” says Bryan Pfaffenberger, professor in UVA’s department of science, technology and society. “Coupled with this belief is technological determinism—specifically, the idea that technologies contain a built-in logic that determines technological outcomes, whether societies prefer those outcomes or not.”

Computer science professors argue that Watson is a logical step in a long continuum of computer research. UVA professors from many fields see computers assuming even larger roles in daily life.

“We can bring the efficiencies of the cyberworld to the physical world,” says Kamin Whitehouse, UVA computer science professor. “The Internet experience is extremely efficient. Information is at your fingertips. Business transactions, banking and communication happen in microseconds. When you leave your desk, however, you immediately notice the inefficiencies of the physical world: congested highways, bad timing of traffic lights and long lines at restaurants. By embedding computation, sensing and control into physical objects and systems, we can use computational optimization to translate cyberworld efficiencies into our own.” Whitehouse expects that computers will be able to drive cars, run factories and handle air traffic control.

Kevin Skadron, also a UVA computer science professor, foresees computers serving as translators and directing the flow of traffic through cities. “These scenarios seem plausible, because they lend themselves well to the brute force of massive computing capability,” he says. “Google is working on computers that can direct traffic flow—though there are political and social challenges that may come up, depending on how much control drivers have to give up.”

Skadron says that computers might be good translators because the task has well-defined inputs and outputs: languages with rules of syntax and lexicons of vocabulary. “What makes translation hard is that human language depends very heavily on context and cultural conventions, such as when to use one word over another even though both have the same meaning,” says Skadron. “Direct interpersonal interactions are much harder, because they depend even more on context, cultural conventions and the ability to read emotion and intent.”
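Skadron’s point about context can be seen in miniature with a toy word-for-word translator. The sketch below is purely illustrative; the lexicon and sentences are invented, and real translation systems are vastly more sophisticated:

```python
# A toy word-for-word English-to-Spanish translator. The lexicon is
# invented for illustration and maps each word to exactly one sense.
LEXICON = {
    "the": "el",
    "bank": "banco",  # the financial institution, not a riverbank
    "is": "está",
    "closed": "cerrado",
    "river": "río",
    "muddy": "fangoso",
}

def translate(sentence: str) -> str:
    """Translate word by word, leaving unknown words unchanged."""
    return " ".join(LEXICON.get(w, w) for w in sentence.lower().split())

print(translate("the bank is closed"))       # el banco está cerrado: fine
print(translate("the river bank is muddy"))  # el río banco está fangoso:
# wrong; here "bank" means "orilla" (shore), and word-level lookup
# has no way to tell the two senses apart.
```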

Humphreys believes computers could become valuable tools in decision making for foreign policy and military affairs. “Making military decisions by computers goes back at least to the heyday of the RAND Corporation in the 1950s,” says Humphreys. “The idea is that you can include the human costs of war in the computer’s assessment of the cost-benefit analysis, but unlike humans, the computer will not allow emotional states of anger, a desire for revenge, a sense of betrayal and so on to drive its decisions.”

These predictions seem plausible and, perhaps, inevitable. Yet could they lead to Minsky’s computer that does anything a human can? It’s an idea that captures the imagination, but according to Whitehouse, we aren’t even close. At the moment, we can’t even make a machine that can do things humans consider simple, such as walking into a store and buying a loaf of bread.

Computers lack intellectual capacities such as abstract thinking. Humans use past experiences to draw conclusions about new ones. Mary Lou Soffa, chair of the department of computer science, says children think abstractly when they learn the concept of “cow,” and when confronted with a real cow in a field or a purple one in a cartoon, can identify both as cows. Computers can’t do that.

If abstract reasoning were possible in computers, programmers would still be faced with the challenge of giving a computer the essence of human experience in a huge array of situations. Could we endow a computer with experience? Even if we could, no two humans experience an event exactly the same way, Soffa says.

At base, computers use binary code: the CPU recognizes only two states, on or off. Switches are arranged according to Boolean logic so that these two states create circuits capable of performing logical and mathematical operations. Whitehouse says computers are great at following rigid rules, but they lack flexibility. If common sense is the ability to know the “rules” of life while also knowing when to follow them and when to bend or break them, then computers don’t have it and may never learn it.
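The step from two states to arithmetic is standard textbook material, and a minimal sketch makes it concrete: a one-bit half adder built from nothing but the Boolean operations AND and XOR.

```python
# A one-bit half adder: XOR produces the sum bit, AND produces the carry.
# Chaining circuits like this one yields full binary arithmetic; every
# calculation a CPU performs reduces to gates of this kind.
def half_adder(a: int, b: int) -> tuple[int, int]:
    return a ^ b, a & b  # (sum, carry)

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> sum {s}, carry {c}")
# 1 + 1 -> sum 0, carry 1: the binary number 10, i.e. two
```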

Other impediments to making human-like machines include re-creating dexterity with robotics, huge memory requirements and the question of what physical form such a creation might take.

“Artificial intelligence isn’t a single technology,” says Pfaffenberger. “On the contrary, it’s a collection of entirely separate systems—imaging, control, inference and many more. Something like a technological singularity might well emerge in the future, but only because people chose to assemble these various systems in a certain way to create a certain type of intelligence.”

“It is important to keep in mind that computers have been approaching these humanlike tasks such as speech recognition, chess or playing Jeopardy! in a very nonhuman way—brute force,” says Skadron. “This suggests that computer abilities may develop differently than human intelligence, unless or until computers can actually mimic human neuronal behavior.”
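What brute force looks like can also be shown in miniature. The sketch below exhaustively searches every line of play in a toy game of Nim (chosen here for illustration; chess programs apply the same game-tree idea at vastly larger scale):

```python
# Brute-force game-tree search for a tiny game of Nim: players alternate
# taking 1 or 2 stones, and whoever takes the last stone wins. The search
# simply tries every continuation, the same idea chess engines scale up.
def best_move(stones: int) -> tuple[int, bool]:
    """Return (move, can_win) for the player to move."""
    for move in (1, 2):
        if move == stones:
            return move, True  # taking the last stone wins outright
        if move < stones and not best_move(stones - move)[1]:
            return move, True  # leave the opponent a losing position
    return 1, False            # every move loses against perfect play

for stones in range(1, 8):
    move, wins = best_move(stones)
    print(f"{stones} stones: take {move} ({'winning' if wins else 'losing'})")
# Multiples of 3 are losing positions for the player to move.
```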

“There’s something intrinsically ‘human’ about being human,” Soffa says. “We create ideas, art and solutions to complex problems. I think that in this there’s a spark or maybe even ‘magic’ that defies explanation. Often you ask somebody how they got an idea and they don’t know.”

Creativity is an enormous stumbling block for computers. At present, Skadron says, it is impossible to imagine how a computer could match the interactive knowledge, intuition, skill and problem-solving abilities of a doctor or an inventor.

Skadron anticipates a few seemingly simple tasks that he thinks will always require the human touch. “An example is retail sales,” says Skadron. “Unlike large corporate deals, which tend to be more data driven, retail sales are very much about appealing to the customer’s emotions, as well as reading subtle cues. Other tasks that have a strong emotional component, such as teaching or nursing, also fall into the category of ‘impossible to imagine a computer taking over.’”

Beyond the challenge of creating a computer with humanlike intelligence, fear may slow the creation of such machines.

Fear of all-powerful machines has many roots, and chief among them is the reluctance to surrender control. This concern provides endless grist for the pop culture mill. Some well-known manifestations are the malevolent computer HAL in 2001: A Space Odyssey and Arnold Schwarzenegger’s character in The Terminator, a cyborg created by a computer system at war with the humans who built it.

Skadron asks: If computers can do everything humans do, and do those things better, does the world need humans at all? “Ultimately, what is so scary and provocative about a Kurzweil scenario in which we download our brains into silicon is that at that point, many of our most basic human needs and wants go away—food, physical pleasure and pain, etc. It raises the most fundamental questions about what it means to be human, and how this would affect the very meaning and purpose of life.”

Humphreys wonders if humans, long the dominant creature on earth, could accept second-class status to mechanical masterminds. “Part of our conception of ourselves as individuals is our sense of autonomy, including our ability to make our own decisions,” says Humphreys. “Once we off-load decisions to machines, we effectively lose that autonomy. That’s one reason so many users dislike the default Facebook privacy control settings. They believe that once Facebook has access to their personal information, they can no longer control how it is used.”

Despite our fears, we still want the benefits that computers deliver. They’ve become indispensable in conducting our day-to-day lives. Imagine a week without them. Most of us could not do our jobs. We could not shop for our food. We could hardly communicate with each other.

Whitehouse neatly summarizes the conflicting emotions humans sometimes have about the machines around them. “We want computers to be just tools, not entities that we have to negotiate with,” he says. “We want them to be ‘smart’ but also want them under our control. For instance, a common complaint is that Office and Windows computer programs have tools we can’t control, and that’s really frustrating.”

Pfaffenberger says the potential of supercomputers should not scare us. “I don’t believe we can create a computer smarter than we are, but we can create machines that combine the knowledge of millions of people,” he says. An anthropologist by training, Pfaffenberger has been engaged with science and technology studies for more than 20 years.

Pfaffenberger argues that technology is not a thing apart from humans, which seems to be an underlying premise of machine-run-amok scenarios. In fact, the human-technology link is inextricable and indispensable. Assume scientists could make an android. Humans would have to plan it, create its hardware and software, build it and decide how to use it. “That android would bear the imprint of the politics, values and social ideals of the people who created it,” he says.

How about Watson? Pfaffenberger watched the Jeopardy! games and cheered for the people. “Come on,” he says, with a laugh, “whose team do you think I’m on?” But, he noted, if you backed the computer, you really were pulling for people—the IBM team that built it and every human who prepared material that went into the database, including tens of thousands of Wikipedia contributors.

“If we’re frightened of tomorrow’s technology, we shouldn’t be worried about the technology itself,” he says. “We should be worried about the society that builds it. We should think about what social values are imparted to our technologies. Will they be agents of social justice, equality and democracy? Or will they promote wealth for a few at the expense of the many, a despoiled environment and other inequities? That’s where a really horrifying future could come out of this for all of us.”