The AI Knowledge Battle: Zuckerberg vs. Musk
Some of the greatest minds of our generation don’t agree on the safety of artificial intelligence (AI)—and they aren’t hiding it. Elon Musk, founder of SpaceX and CEO of Tesla, has made numerous pronouncements about the dangers AI could pose to human society. At a recent meeting of U.S. governors, he wasn’t subtle about it. “Until people see robots going down the street killing people, they don’t know how to react,” he said. Facebook’s Mark Zuckerberg, for his part, called Musk’s warnings “pretty irresponsible” during a recent Facebook Live session, and his own view of AI seems nothing short of enthusiastic. So, who is right?
The more we dig into the AI issue, the murkier it gets. Stephen Hawking has said AI could be either the best thing ever to happen to human society or the worst. The truth is, none of us knows exactly what AI will bring. In the end, the technology’s trajectory will depend on humans, and on how responsibly we handle it. Which brings me to another point: humans don’t really know how AI works.
Last month, MIT Technology Review published an article pointing out that many of the developers driving AI don’t actually understand how the deep learning algorithms behind it arrive at their decisions. It’s one thing to develop a tool you know how to manage. It’s another to develop a system that makes its own decisions when you have no clear way to explain or control them. In my article, The Ethical Side of Artificial Intelligence, I touched on some of these issues. We’ve already seen what happens when we lose sight of what an AI is doing: just last week, Facebook reportedly shut down a chatbot experiment after two bots began negotiating in a shorthand even their programmers couldn’t follow.
Regardless of whose “side” you’re on in the AI debate, one truth remains: the technology isn’t going anywhere, at least not anytime soon. Below, we review a few of its potential outcomes—and what they could mean for the human race.
Team Zuckerberg: AI Gives Humans the Ultimate Life
First, the positives. AI holds tremendous potential to keep humans safer and healthier—and to make our lives easier in the process. It is already making self-driving cars a reality, a development many say could dramatically reduce car crashes, which kill more than a million people worldwide each year. That’s got to be good, right? But there’s so much more. AI could stand in for humans on the battlefield, in dangerous law enforcement situations, and in hazardous workplace tasks. And that’s before we even get to its potential as a personal assistant, managing the menial tasks of our daily lives.
Team Musk: AI Will Take Our Jobs and Take Over the World
As I’ve written before in my piece Artificial Intelligence and Automation: Predictions for the Future, AI will take jobs in the coming decade. One widely cited study estimates that nearly 40 percent of U.S. jobs are at risk of automation by the early 2030s, and another predicts that 85 percent of customer interactions will be handled without a human agent by 2020. To that, I say, “And?” Other research suggests that some 85 percent of the jobs we will be doing in 2030 don’t even exist yet. Clearly, new jobs are on the horizon with AI.
As for the idea that AI will produce robots stronger and smarter than humans—robots that could take over the world and leave us for dead—I can see the cause for concern. Although I agree with Zuckerberg that AI holds tremendous potential to save lives and make life easier, I also recognize that humans have been known to make terrible decisions. Musk has called for proactive government regulation of AI development, and I am completely on board, provided that regulation doesn’t slow the development of life-saving, customer-serving technology that could improve the lives of people around the world.
None of us—no matter how smart or famous we may be—can accurately predict what AI will do for, or to, humanity. But the same could have been said of the Industrial Revolution or any other revolution in history. The unknown is always scary, and it is especially so when it involves technology that, for the first time, has the potential to outsmart humans. Our best hope is that the experts stop arguing and start working together to find a smart, viable path forward.