"Her"
The trailer: http://www.youtube.com/watch?v=WzV6mXIOVl4
***
Even when our debates seem petty, you can’t say national politics doesn’t deal with weighty matters, from jobs to inequality to affordable health care and more. But lately I’ve become obsessed with an issue so daunting it makes even the biggest “normal” questions of public life seem tiny. I’m talking about the risks posed by “runaway” artificial intelligence (AI). What happens when we share the planet with self-aware, self-improving machines that evolve beyond our ability to control or understand? Are we creating machines that are destined to destroy us?
I know when I put it this way it sounds like science fiction, or the ravings of a crank. So let me explain how I came to put this on your screen.
A few years ago I read chunks of Ray Kurzweil’s book “The Singularity Is Near.” Kurzweil argued that what sets our age apart from all previous ones is the accelerating pace of technological advance — an acceleration made possible by the digitization of everything. Because of this unprecedented pace of change, he said, we’re just a few decades away from basically meshing with computers and transcending human biology (think Google, only much better, inside your head). This development will supercharge notions of “intelligence,” Kurzweil predicted, and even make it possible to upload digitized versions of our brains to the cloud so that some form of “us” lives forever.
Mind-blowing and unsettling stuff, to say the least. If Kurzweil’s right, I recall thinking, what should I tell my daughter about how to live — or even about what it means to be human?
Kurzweil has since become enshrined as America’s uber-optimist on these trends. He and other evangelists say accelerating technology will soon equip us to solve our greatest energy, education, health and climate challenges en route to extending the human lifespan indefinitely.
But a camp of worrywarts has sprung up as well. The skeptics fear that a toxic mix of artificial intelligence, robotics and bio- and nanotechnology could make previous threats of nuclear devastation seem “easy” to manage by comparison. These people aren’t cranks. They’re folks like Jaan Tallinn, the 41-year-old Estonian programming whiz who helped create Skype and now fears he’s more likely to die from some AI advance run amok than from cancer or heart disease. Or Lord Martin Rees, a dean of Britain’s science establishment whose last book bore the upbeat title “Our Final Century” and who with Tallinn has launched the Centre for the Study of Existential Risk at Cambridge to think through how bad things could get and what to do about it.
Now comes James Barrat with a new book — “Our Final Invention: Artificial Intelligence and the End of the Human Era” — that accessibly chronicles these risks and how a number of top AI researchers and observers see them. If you read just one book that makes you confront scary high-tech realities that we’ll soon have no choice but to address, make it this one.
In an interview the other day for my podcast, “This...Is Interesting,” Barrat, an Annapolis-based documentary filmmaker, noted that every technology since fire has brought both promise and peril. How should we weigh the balance with AI?
In talking with dozens of people in the field, Barrat found that everyone is aware of the potential risks of “runaway AI,” but no one spends any time on them. Why not? Barrat surmised that “normalcy bias” — which holds that if something awful hasn’t happened up to now, it probably won’t happen in the future — accounts for the silence.
Many AI researchers simply assume we’ll be able to build “friendly AI,” systems that are programmed with our values and with respect for humans as their creators. When pressed, however, most researchers admit to Barrat that this is wishful thinking.
The better question may be this: Once our machines become literally millions or trillions of times smarter than we are (in terms of processing power and the capabilities this enables), what reason is there to think they’ll view us any differently than we view ants or pets?
The military applications of AI guarantee a new arms race, which the Pentagon and the Chinese are already quietly engaged in. AI’s endless commercial applications assure an equally competitive sprint by major firms. IBM, Barrat said, has been laudably transparent with its plans to turn its Jeopardy-playing “Watson” into a peerless medical diagnostician. But Google — which hired Kurzweil earlier this year as director of engineering, and which also has a former head of the Pentagon’s advanced research agency on the payroll — isn’t talking.
Meanwhile, the military is already debating the ethical implications of giving autonomous drones the authority to use lethal force without human intervention. Barrat sees the coming AI crisis as analogous to nuclear fission and recombinant DNA, which inspired passionate debates over how to pursue these technologies responsibly.
This spring Hollywood will weigh in with “Transcendence,” starring Johnny Depp as an AI researcher targeted by violent extremists who think we’re crossing a Rubicon that will be a disaster for humanity.
At the end of our interview, I asked Barrat a question I meant as a joke. I know you’ve got a grim view of what may lie ahead, I said, but does that mean you’re buying property for your family on a desert island just in case?
“I don’t want to really scare you,” he said, after half a chuckle. “But it was alarming how many people I talked to who are highly placed people in AI who have retreats that are sort of ‘bug out’ houses” to which they could flee if it all hits the fan.
Whoa.
It’s time to take this conversation beyond a few hundred technology sector insiders or even those reached by Barrat’s indispensable wake-up call. In his State of the Union address next month, President Obama should set up a presidential commission on the promise and perils of artificial intelligence to kick-start the national debate AI’s future demands.