Rodney Brooks: AI Systems Are “Very Narrow Tools”

Brooks says, "People hear AI and they think they're these smart machines with wants and desires and capabilities. They don't have any of that. I wish they did, but that's not what we've been able to do in the last, really, 55 years of AI research."

Yesterday we shared the words of Gill Pratt, program manager at the Defense Advanced Research Projects Agency (DARPA), who dismissed the notion of killer robots, saying we should be more afraid of smartphones and those who can access them.

Today we pass along the wisdom of Rodney Brooks, co-founder of iRobot and founder, chairman and CTO of Rethink Robotics. The guy knows a thing or two about robotics, and Rethink’s Baxter and Sawyer robots employ artificial intelligence.

So what does Brooks think about AI and speculation about its potential dangers? Well, safe to say he considers AI more of a tool than a threat. Here’s what Brooks had to say in a recent video:

“None of our AI systems have any intentionality. They’re all very narrow tools: solve this sub-problem, solve this sub-problem. They don’t have the intentionality, really, even of an insect, let alone a small mammal. People hear AI and they think they’re these smart machines with wants and desires and capabilities. They don’t have any of that. I wish they did, but that’s not what we’ve been able to do in the last, really, 55 years of AI research.”

As Brooks has said in the past, many of the anti-AI crowd are high-profile people in the technology industry, not the AI industry. Brooks wrote a blog post in November 2014 addressing AI fears, and the following is too good not to re-share:

“I say relax. Chill. This all comes from some fundamental misunderstandings of the nature of the undeniable progress that is being made in AI, and from a misunderstanding of how far we really are from having volitional or intentional artificially intelligent beings, whether they be deeply benevolent or malevolent.

“By the way, this is not a new fear, and we’ve seen it played out in movies for a long time, from “2001: A Space Odyssey”, in 1968, “Colossus: The Forbin Project” in 1970, through many others, and then “I, Robot” in 2004. In all cases a computer decided that humans couldn’t be trusted to run things and started murdering them. The computer knew better than the people who built them, so it started killing them. (Fortunately that doesn’t happen with most teenagers, who always know better than the parents who built them.)

“I think it is a mistake to be worrying about us developing malevolent AI anytime in the next few hundred years. I think the worry stems from a fundamental error in not distinguishing the difference between the very real recent advances in a particular aspect of AI, and the enormity and complexity of building sentient volitional intelligence. Recent advances in deep machine learning let us teach our machines things like how to distinguish classes of inputs and to fit curves to time data. This lets our machines “know” whether an image is that of a cat or not, or to “know” what is about to fail as the temperature increases in a particular sensor inside a jet engine. But this is only part of being intelligent, and Moore’s Law applied to this very real technical advance will not by itself bring about human level or super human level intelligence. While deep learning may come up with a category of things appearing in videos that correlates with cats, it doesn’t help very much at all in “knowing” what catness is, as distinct from dogness, nor that those concepts are much more similar to each other than to salamanderness. And deep learning does not help in giving a machine “intent”, or any overarching goals or “wants”. And it doesn’t help a machine explain how it is that it “knows” something, or what the implications of the knowledge are, or when that knowledge might be applicable, or counterfactually what would be the consequences of that knowledge being false. Malevolent AI would need all these capabilities, and then some. Both an intent to do something and an understanding of human goals, motivations, and behaviors would be keys to being evil towards humans.”

About the Author

Steve Crowe is managing editor of Robotics Trends. He has been writing about technology since 2008. He lives in Belchertown, MA with his wife and daughter.

