Don’t look now, but while our attention is distracted by all the media focus on climate change, a much more ominous threat looms on the horizon: artificial intelligence (AI). Unlike climate change, AI is not an unintended consequence of technology but something fellow humans are working hard to bring about.
The goal of the very smart people working on AI is the end of the “human era,” as it is being called, and the start of the “transhumanist era,” in which the dominant creatures will be not us but our computers. Our machines will not only beat us at chess and driving—they will be better than we are at everything. We will have been rendered obsolete. They will be to us, as one analogy has it, as we are to gorillas.
If it’s not all a Silicon Valley delusion of grandeur, the end of the human era could be here within a generation or two.
The transhuman future sees us retiring, as it were, at birth, since there will be nothing we, their relatively dimwitted ancestors, can do that our robotic offspring can’t do better. (It is hoped that AI will take care of us as properly brought-up children take care of aging parents.) Another scenario has us uploading ourselves into these genius-level progeny of ours for an immortality of some sort.
This future is actually being hailed with enthusiasm by those who think of themselves as “transhumanists.”
Although intended, like all technology, as a servant that improves human life by solving problems such as cancer or climate change, AI would by definition transcend the role of servant. According to Stephen Hawking, “Once humans develop artificial intelligence, it will take off on its own and redesign itself at an ever-increasing rate.” Taking off on its own, it might come up with the brilliant solution to cancer of simply eliminating the outmoded, vulnerable creature for whom cancer is a problem. (AI itself would, of course, not be susceptible.)
Dystopian outcomes have long been worried about in science fiction movies such as “I, Robot,” and some in the AI field, such as Elon Musk, are concerned enough to have started thinking about how to program our computers to have only human welfare at “heart.” But that effort would seem to involve us in knotty contradictions.
Consider, for example, a dilemma facing the self-driving car industry: how to program cars to react to a group of kids thoughtlessly darting into the road ahead. If the car keeps going, it will mow down the kids. If it swerves left or right to avoid them, it may well kill the owner (and mess up the car). Tough decision for a human driver. But (according to a LinkedIn blog by John Battelle) Mercedes-Benz has chosen in its programming to sacrifice the kids in favor of the owner because “let’s be honest — who wants to buy an autonomous car that might choose to kill you in any given situation?”
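What makes this unsettling is how small the moral question becomes once it is reduced to code. The following is a purely hypothetical sketch of the two competing policies — the function names and parameters are mine, not from any real autonomous-driving system:

```python
# Hypothetical illustration of the dilemma described above.
# Nothing here comes from any actual manufacturer's codebase.

def occupant_first_maneuver(pedestrians_ahead: int, occupants: int) -> str:
    """The occupant-first stance attributed to Mercedes-Benz:
    whenever every option endangers someone, protect the occupant."""
    if pedestrians_ahead == 0:
        return "continue"  # no conflict, nothing to decide
    # Swerving risks the occupant; continuing risks the pedestrians.
    # This policy resolves the conflict the same way every time.
    return "continue"

def utilitarian_maneuver(pedestrians_ahead: int, occupants: int) -> str:
    """A rival policy: minimize the total number of people at risk."""
    if pedestrians_ahead > occupants:
        return "swerve"
    return "continue"

# Three kids in the road, one person in the car:
print(occupant_first_maneuver(3, 1))  # continue
print(utilitarian_maneuver(3, 1))     # swerve
```

The point of the sketch is that the entire moral weight of the decision sits in a line or two of branching logic — written, today, by whoever happens to be employed to write it.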
What if we don’t want to trust profit-motivated companies—let alone the self-programming computers of the AI future—to solve those moral dilemmas for us?
What if we of the human era don’t like the sound of retirement at birth or uploading ourselves onto a computer?
What if we don’t even want machines to do all the heavy lifting because, as embodied creatures, we require meaningful physical engagement with the world around us?
Trusting the dubiously motivated moguls of Silicon Valley with the design of the human future is a little like trusting the oil industry to lead the charge on climate change. We have widely publicized summits on climate change. Where is the world summit to begin addressing the disturbing aspects of the planned obsolescence of the human era? I’d start by getting rid of that dismissive “human era” phrase itself.