What AI Can and Can’t Do: DARPA’s Realistic View

Is artificial intelligence a risk to humanity? DARPA weighs in on what AI can do, what it can’t do, and where it is headed.

Is artificial intelligence (AI) a risk to humanity? It depends on who you ask.

The Future of Life Institute certainly thinks so. It recently unveiled its Asilomar AI Principles, a set of 23 guidelines established by AI experts, roboticists, and tech leaders to ensure the development of ethical, safe AI. Bill Gates, Stephen Hawking, Elon Musk, and many others think so, too.

Rodney Brooks, co-founder of iRobot and founder, chairman, and CTO of Rethink Robotics, is in the other camp. He considers AI more of a tool than a threat. He’s on the record saying that “[AI systems are] all very narrow tools: solve this sub-problem, solve this sub-problem. They don’t have the intentionality, really, even of an insect, let alone a small mammal.” Many agree with Brooks, too.

Now you can add DARPA to the list of experts who think AI is being overblown. “There’s been a lot of hype and bluster about AI,” John Launchbury, director of DARPA’s Information Innovation Office (I2O), says in a DARPA YouTube video. “[There has been] talk about a Singularity, that AI will exceed the capability of human beings, maybe even displace humanity. We’re going to take a much more level-headed approach and attempt to demystify.”

Launchbury breaks AI into three waves to explain what AI can do, what it can’t do, and where it is headed. We’ve broken down the three waves below, but we highly suggest watching the video, which provides a great look at the current state of AI. The slides from Launchbury’s presentation are also available as a PDF download.

1. Handcrafted Knowledge

First Wave AI

The first of DARPA’s three waves is “Handcrafted Knowledge”: systems built by experts who take knowledge of a domain and characterize it in rules the computer can understand. The computer then works through the implications of those rules. Launchbury cites logistics (scheduling), games (chess), and tax software (TurboTax) as examples of first-wave AI systems.

Launchbury says first-wave AI systems are good at logical reasoning, taking the facts of a concrete situation and working through them, but only within narrowly defined problems: they don’t have the ability to learn over time, and they handle uncertainty poorly.
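Launchbury’s TurboTax example can be made concrete. A first-wave system is, at its core, expert-authored rules applied deterministically to one case. The sketch below is hypothetical; the brackets and rates are invented for illustration and are not real tax law:

```python
# First-wave AI sketch: handcrafted knowledge, written down by a domain expert.
# The brackets below are hypothetical, for illustration only -- not real tax law.
TAX_BRACKETS = [
    (0, 10_000, 0.10),
    (10_000, 40_000, 0.20),
    (40_000, float("inf"), 0.30),
]

def tax_owed(income: float) -> float:
    """Apply the expert-authored bracket rules to one concrete situation."""
    owed = 0.0
    for low, high, rate in TAX_BRACKETS:
        if income > low:
            owed += (min(income, high) - low) * rate
    return owed
```

The system reasons flawlessly within its narrow domain, but it never learns: if the rules change, an expert must rewrite them by hand, and any situation the rules don’t anticipate is handled badly or not at all.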

2. Statistical Learning

DARPA’s second wave of AI is called “Statistical Learning,” which Launchbury defines as AI systems that are very good at perceiving the natural world and adapting to different situations. He explains that these types of AI systems are used for voice recognition and facial recognition.

Unlike first-wave systems, these systems have only minimal reasoning capability, though they do offer nuanced classification and prediction. Launchbury points to Microsoft’s Tay.ai Twitter bot as a good example of those limits. The bot was designed to understand conversational language among young people online and learn how to engage in playful conversations.

However, Microsoft shut down Tay.ai just 16 hours after it launched because of its offensive tweets. Learning from the tweets it received, Tay.ai picked up racist language. Launchbury says this shows how much an AI’s behavior depends on the training data it uses to become smarter: “skewed training data creates maladaptation.”
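The “skewed training data creates maladaptation” point can be caricatured in a few lines. The toy word-frequency “learner” below is not Tay’s actual design (which Microsoft has not published); it simply imitates whatever associations dominate its training data, which is the failure mode Launchbury describes:

```python
from collections import Counter

# Second-wave AI caricature: a toy learner that labels text by counting
# which label each word co-occurred with in training. It has no reasoning;
# it just mirrors the statistics of its training data.
def train(samples):
    """Build word -> label-count statistics from (text, label) pairs."""
    counts = {}
    for text, label in samples:
        for word in text.lower().split():
            counts.setdefault(word, Counter())[label] += 1
    return counts

def predict(model, text):
    """Vote with the training-time label counts of each known word."""
    votes = Counter()
    for word in text.lower().split():
        if word in model:
            votes += model[word]
    return votes.most_common(1)[0][0] if votes else None

# Skewed training data: hostile examples dominate 9 to 1, so even a
# neutral sentence gets labeled hostile -- the maladaptation Launchbury means.
skewed = [("you are great", "friendly")] + [("you are awful", "hostile")] * 9
model = train(skewed)
```

With this model, `predict(model, "you are interesting")` comes back `"hostile"`, because the only words it recognizes were overwhelmingly seen with hostile labels during training.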

3. Contextual Adaptation

DARPA’s third wave of AI is called “Contextual Adaptation,” which Launchbury says is where AI needs to go to move beyond the spreadsheet-style calculation of today’s systems. The first two waves of AI, he says, aren’t sufficient.

Contextual Adaptation combines the strengths of the first two waves: systems that can both learn over time and explain why they make the decisions they do.
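DARPA has published no reference design for third-wave systems, so any code can only be a caricature. The hypothetical sketch below pairs a (pretend) statistical confidence score with handcrafted contextual rules, so the decision comes back with a human-readable reason, which is the “explain why” property Launchbury emphasizes; every name and threshold here is invented for illustration:

```python
# Hypothetical third-wave caricature: a statistical score (second wave)
# filtered through handcrafted contextual rules (first wave), returning
# both a decision and an explanation for it.
def classify_with_explanation(score: float, context: str):
    """Combine a statistical confidence with context rules; explain the result."""
    if context == "low-stakes" and score > 0.5:
        return "accept", "score above 0.5 suffices in a low-stakes context"
    if context == "high-stakes" and score > 0.9:
        return "accept", "score clears the 0.9 bar a high-stakes context requires"
    return "reject", f"score {score} is too low for a {context} context"
```

The same score of 0.7 is accepted in a low-stakes context and rejected in a high-stakes one, and in both cases the system can say why, which is exactly what pure statistical learners cannot do.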

About the Author

Steve Crowe · Steve Crowe is managing editor of Robotics Trends. Steve has been writing about technology since 2008. He lives in Belchertown, MA with his wife and daughter.
Contact Steve Crowe: scrowe@ehpub.com

