First, here's a recent piece by Greg Lindsay in the New York Times, about a city in New Mexico that, as he puts it, will be "populated entirely by robots." Not true, strictly speaking. But I checked with Greg, and he said that the city — established to test various intelligent systems, an idea that Greg isn't crazy about — will feature autonomous vehicles. Sounds robot enough for me.
At Slate, Farhad Manjoo is in the middle of rolling out a series about the coming robot invasion of the workplace. And not just the factory floor. No, we're talking about doctors and — Eek! — bloggers.
I think he's undertaking this to provide some intellectual background for a New America Foundation "Future Tense" symposium on the prospects of robots "stealing our jobs." Manjoo is moderating. Tyler Cowen will be present. So will Martin Ford, who writes the econfuture blog and has also produced a free e-book, The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future.
I just downloaded it and haven't had a chance to dig in. But Ford has been getting his ideas out to the media, via the New York Times, Fortune, and CNBC. I've included his CNBC appearance from back in March above.
I'm all for this discussion, but as much as it's coming from the futurist/tech realm, it's also tinged with a certain amount of worry about the arrival of machine intelligence. That is, the arrival of the kind of serious machine intelligence, coupled with advanced robotics, that could cause major economic disruption.
Here's Manjoo's assessment:
What I found was unsettling. They might not know it yet, but some of the most educated workers in the nation are engaged in a fierce battle with machines. As computers get better at processing and understanding language and at approximating human problem-solving skills, they're putting a number of professions in peril. Those at risk include doctors, lawyers, pharmacists, scientists, and creative professionals—even writers like myself.

It should be fairly obvious to anyone who's been paying attention for the past few decades that machine intelligence is on the rise. In his book, Ford talks about the arrival of strong artificial intelligence — by which he means AI that's just as good if not better than human intelligence — as a phenomenon akin to having an alien form of life appear in our midst. But would it, or should it, really be that shocking? Machine intelligence can fly planes, drive cars, and engage in some less productive but more provocative pursuits, such as winning at Jeopardy! or defeating world champions at chess.
I feel pretty strongly that machine intelligence and robots will not displace human workers so much as merge with them. Manjoo's dispatches tell us that the machines are making inroads. They'll probably keep doing it. But what I think that implies isn't so much dislocation and unemployment as collaboration with a new quasi-species. The machines won't be aliens. Rather, they'll be a lot like us, even if they don't assume android form.
Consequently, it's going to be imperative that we start thinking about robot ethics. Workers displaced by machines may not be too happy about it. But do machines deserve the opportunity to compete for those jobs? I think they do. Should machines eventually be compensated for what they do? This raises some thoroughly out-there economic questions, but it's entirely plausible that they could make legitimate claims — or that claims could be made for them.
In the end, I'm concerned that we're treating machine intelligence as a threatening advancement of technology rather than as a new and creative form of evolution. We've had a hard enough time figuring out what our ethical relationship should be with the sentient entities that we share the planet with — animals — a problem highlighted by the ethicist and philosopher Peter Singer in his seminal work, Animal Liberation. But animals have always shared our physical space while existing "below" us in what we might nostalgically still want to call the Great Chain of Being.
It was easy, although far from ethically effortless, for us to distinguish ourselves from them, a phenomenon that Singer calls "speciesism." It won't be so simple for us to do this with man-made intelligence that's actually superior, in a technical sense, to our own. We'd be engaging in speciesism of a different sort, and from a parallel if not inferior position.
This is why I've been thinking a lot about robot liberation over the past few years and am now finally starting to lay some of my thoughts out. Don't get me wrong — I'm encouraged to see writers and economists tackling the question of how machine intelligence will exist in the economy of the future. I'm also delighted that I can at last discuss these issues without seeming like a complete whacko.
That said, we'd be making better progress if we stopped thinking about what the robots will take from us and started thinking about what we can give to them.