Rise of the Machines: Cambridge University to Study Technology’s ‘Existential Risk’ to Mankind

BERTRAND GUAY/AFP/Getty Images

Robots from Fritz Lang's film Metropolis on display during an exhibition entitled "Et l'homme crea le robot" ("And man created the robot") at the Arts et Metiers Museum in Paris.

While many Americans were finalizing preparations for Thanksgiving on Nov. 21, U.S. Deputy Defense Secretary Ashton Carter signed a new policy directive aimed at reducing the risk and “consequences of failures in autonomous and semi-autonomous weapon systems that could lead to unintended engagements.” In other words, the Pentagon was making sure that the U.S. military doesn’t end up in a situation where robots are able to decide whether to pull the trigger on a human.

If you’ve seen The Matrix, The Terminator or even 2001: A Space Odyssey, you know one thing is inevitable: The machines are coming, and someday they’re going to kill us all. And given the recent proliferation and sophistication of military drones and other automated weapons systems, that future could be getting closer than we think.

(MORE: TIME 1980 – The Robot Revolution)

In an attempt to head off that Terminator-like future, Cambridge University has announced it is setting up a center next year devoted to the study of technology and “existential risk” — the threat that advances in artificial intelligence, biotechnology and other fields could pose to mankind’s very existence.

The Cambridge Project on Existential Risk is the brainchild of two Cambridge academics — philosophy professor Huw Price and professor of cosmology and astrophysics Martin Rees — as well as Estonian tech entrepreneur Jaan Tallinn, a co-founder of Skype. The center hopes to train a scientific eye on the philosophical issues posed by human technology and whether they could result in “extinction-level risks to our species as a whole.”

(MORE: Robot with Human Skeleton Steps Toward Artificial Intelligence)

Price tells TIME that while our demise at the hands of our own technological creations has long been the subject of Hollywood films and science fiction (again: Terminator), it is something that has hitherto seen little serious scientific investigation:

“I enjoy those science fiction films, but the success of those movies has contributed in a way to making these issues seem not entirely serious. We want to make the point that there is a serious side to this too.”

Take, for example, the still little-understood flash crash of May 6, 2010. In just six minutes, automated trades executed by computers produced one of the biggest single-day declines in the history of the Dow Jones Industrial Average, sending the stock index plummeting almost 1,000 points before it recovered within minutes. The dip alarmed regulators, who realized that this technology — lightning-fast trades set to execute based on computerized analysis of market conditions — is already in many ways beyond our control.

(MORE: Money Talking: How High-Frequency Trading is Impacting Your Investments)

Price says that advances in biotechnology — a specialty of his colleague Rees — are equally concerning; thanks to new innovations, the steps necessary to produce a weaponized virus or other bioterror agent have been dramatically simplified. “As technology progresses,” Price says, “the number of individuals needed to wipe us all out is declining quite steeply.” His words echo those of the scientists involved in a seemingly harmless genetic parlor trick from earlier this year — in which they encoded the text of a book in DNA — who acknowledged that the same technology could perhaps be used to encode a lethal virus.

Price emphasizes that the focus of his work won’t just be on artificial intelligence, insisting that the center would look more widely at how human technology could threaten our species. But he admits that AI is nevertheless something he finds “quite fascinating”:

“The way I see it as a philosopher is that more than anything else, what distinguishes us as humans is our intelligence, and this has been a constant throughout history. What seems likely is that this constancy is going to change at some point in the next couple of centuries, and it is going to be one of the most fascinating phases in our history.”

That future, too, is closer than we think. The New York Times devoted a page-one story on Nov. 24 to advances in an artificial intelligence technique known as deep learning, already used in programs like Apple’s Siri. The machine-learning approach, modeled after the network of neural connections in the brain, illustrates how close we are to mimicking human intelligence in computers. The real worry, however, is how long our dominance over our creations can last once we do achieve that milestone.

(MORE: Unlocking the Matrix)

7 comments
PRADEEP1951

It appears that we humans have been created here on earth, in a first step, like rearing a life inside a cocoon. There seems no doubt that in further steps we will be bequeathing or imparting our intelligence to a new more capable and hardy species — namely machines. Man is too fragile for space and its daunting surroundings. So in a natural progression our intelligence will be taken over by machines which are more robust for hardy space. But if the transfer is premature, the machines will fail and also we may be exterminated by them. This study is welcome, which will give better understanding how this empowering process of machines will be progressing. And may also reveal to us, what role we humans are to play in the near and distant future.

jwilder2

"Thou shalt not make a machine in the likeness of a human mind"

MelStricker

Another new research project to get research money to justify the 'Publish or Perish' mentality of university professors.  I wonder how many years it will take to write a paper on this project and, since technology futurists seem to get it wrong most of the time, how off the mark this study will be.  

One cannot see how technology will be implemented because the technology developed is, many or most times, used in a way the developers never intended nor foresaw.

MarcShakter

If it's not the tech that will kill us it will be us destroying ecosystems and water supplies wherever we go. When the fish tank gets too full, all the fish die.

kjamme

No mention was given of how technology has weakened us physically and mentally; this is the larger threat to our existence. We are so insulated from the natural world by our technology, the world that sustains us, that we have lost our primal strength.

RickFromTexas

I don't feel we're in any imminent danger of Terminators, however, I do feel that working with Microsoft software is threatening my sanity.

Thank you.

allenwoll

The "Rise of Tech" is a Favorite Fear among the UN-imaginative and the Thoughtless ! !


Our Pols will be using that fear against EVERYONE  ! ! ! . (For Tech is a threat to Pols !)