People Stories

The Road Ahead for AI

December 18, 2017

A Mercedes-Benz C-Class already knows our preferences, from where we like the mirrors positioned to how the seat should be adjusted. The vehicle can also predict, based on our usage patterns, which radio stations we’re likely to prefer on the road and which navigation destinations are relevant to us at that time of day. The more we drive, the more the car learns about us, and the more it can predict what we’ll like.

As Principal Machine Learning Engineer at MBRDNA since 2010, Rigel Smiroldo constructs algorithms that make cars a seamless part of our digital lives.

According to Smiroldo, we’re on our way to turning the car into the ultimate user device: one we interact with in the same way we interact with other human beings. Smiroldo explains that most AI technology today learns by example: an AI is taught how to react to certain stimuli by being given examples of stimuli and the proper response to each. It is then up to the AI itself to find the fundamental patterns driving that behavior and determine the correct way to respond to future stimuli it was never shown. These pairs of stimuli and gold-standard responses go by the nebulous term “data.” A fundamental challenge, therefore, lies in finding the right data — data that can teach the AI to act in a human-like way.
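To make the idea of learning by example concrete, here is a minimal sketch of the simplest possible version: a nearest-neighbour learner that is shown (stimulus, response) pairs and, given a stimulus it never saw, answers with the response of the most similar known example. All data, names, and the feature encoding here are invented for illustration; this is not MBRDNA’s actual system.

```python
# Learning by example, in miniature: (stimulus, response) pairs in,
# a response to an unseen stimulus out. Stimuli are illustrative
# (hour-of-day, day-of-week) feature vectors.

def nearest_neighbour(train, stimulus):
    """Return the response paired with the most similar known stimulus."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(train, key=lambda pair: distance(pair[0], stimulus))
    return best[1]

# Training pairs: (hour, weekday index 0-6) -> preferred radio station.
examples = [
    ((7, 1), "news"),   # weekday morning commute
    ((8, 2), "news"),
    ((18, 4), "rock"),  # Friday evening
    ((10, 6), "jazz"),  # weekend late morning
]

# A stimulus the learner was never shown: around 7:00 on a Wednesday.
print(nearest_neighbour(examples, (7, 3)))  # → news
```

The “find the fundamental patterns” step in a real system is far more sophisticated, but the contract is the same: generalize from shown pairs to unseen stimuli.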

“In the car,” says Smiroldo, “the data seen by the AI is the behavioral data of the users of the vehicle. In other words, how the users themselves respond to various stimuli.”

Since the car is exposed to the particular responses of an individual, the AI in the car is, in essence, learning to mimic its user’s behavior. “It’s like watching a child following behind you, copying your movements.” How is that useful? Smiroldo explains, “Oftentimes the car needs information from the user, such as an address to drive to. In these cases, the system can instead query the AI system for the answer, which, having been taught to mimic the user, will give an answer similar to the one the user would have given. That way, the user doesn’t have to be bothered unless it is something the AI doesn’t feel that it knows.” This is a fundamentally new paradigm in user interface design: the interface has to be built both for the human users of the vehicle and for an AI system, which doesn’t have the same distraction requirements and isn’t as easily annoyed.

As an example, your car would learn that you typically drive to work at 7:30, Monday through Friday. It learns that you listen to the news in the morning. It learns that you prefer the scenic route to work, not the one typically recommended by the GPS, and that you always stop at a particular coffee shop for an espresso just before arriving in the office parking lot. The car then becomes personal to you: thanks to the data generated as you use the vehicle, it can tailor future experiences to these preferences.
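One simple way to sketch this kind of routine-learning is a frequency count over past trips: group what the driver did by time slot, suggest the most common choice for that slot, and fall back to asking the driver when the slot is unknown. The trip logs, slot encoding, and destination names below are invented for illustration, not the production approach.

```python
from collections import Counter, defaultdict

def learn_routine(trips):
    """Count destinations per (is_weekday, hour) slot from trip logs."""
    slots = defaultdict(Counter)
    for day, hour, destination in trips:  # day: 0=Mon .. 6=Sun
        slots[(day < 5, hour)][destination] += 1
    return slots

def predict(slots, day, hour):
    """Suggest the most common destination for this slot, or None."""
    slot = (day < 5, hour)
    if slot in slots:
        return slots[slot].most_common(1)[0][0]
    return None  # unknown slot: the system should ask the driver instead

trips = [
    (0, 7, "office"), (1, 7, "office"), (2, 7, "office"),
    (0, 17, "gym"), (5, 10, "farmers market"),
]
slots = learn_routine(trips)
print(predict(slots, 3, 7))   # Thursday 7:00 → office
print(predict(slots, 6, 15))  # never-seen slot → None
```

The `None` fallback mirrors the point above: the driver is only bothered when the system doesn’t feel that it knows.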

Navigating Subtasks

Smiroldo arrived in the burgeoning field of machine learning at a key moment, in 2009. After graduating from the University of California, Berkeley, with a focus on artificial intelligence and computer simulation, he’s seen plenty of progress in AI. Now, he says, we’re “perhaps seven years away” from AI systems being able to do truly complicated things, like navigating their way through “a deep hierarchy of sub-objectives, a goal of a field known as hierarchical reinforcement learning.” Smiroldo and his team are working on technology smart enough to assess multiple things at once, such as lip-reading while assessing the information the cameras are capturing.

“The systems are superhuman at specific tasks — they’re better than we are in many domains, from game playing to object recognition.

Facial recognition is just one example of a challenging task recently crossed off the ‘machines are worse than humans’ list, thanks to advancements in AI. But the big missing piece is still for machines to be able to autonomously break complex goals into a series of simple tasks.”

Teenagers with brand-new licenses may drive to the mall without a second thought, but it’s actually a complex effort for an AI system to accomplish. “There are millions of subtasks, or ‘sub-objectives,’ involved: starting the car; leaving the parking space; backing down the driveway; yielding for oncoming traffic; avoiding cyclists; and getting into the proper lane — and that’s before getting on the highway. Humans can break these individual tasks into sub-objectives, reorder tasks and still achieve the end goal.” But for AI systems, “it gets trickier when there are a million different sub-objectives entailed within a seemingly simple goal like going to the mall. The AI system in the car needs to have automatically learned to solve a wide library of simple tasks, as well as identify a high-level task — and be able to frame the high-level task as a combination of the simple tasks,” Smiroldo adds. “It’s not trivial.”
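The idea of framing a high-level goal as a combination of simple tasks can be sketched as a task hierarchy that gets recursively expanded into primitive actions. The task library below is invented for illustration — in hierarchical reinforcement learning the decomposition itself is learned, not hand-written — but it shows the structure Smiroldo describes.

```python
# A hand-written stand-in for a learned library of sub-objectives.
# Anything not in the library is treated as a primitive action.
TASK_LIBRARY = {
    "drive to the mall": ["leave home", "take highway", "park"],
    "leave home": ["start car", "back down driveway", "yield to traffic"],
    "take highway": ["merge", "keep lane", "exit"],
    "park": ["find space", "pull in"],
}

def flatten(goal):
    """Recursively expand a goal into the primitive actions beneath it."""
    if goal not in TASK_LIBRARY:   # not decomposable: a primitive action
        return [goal]
    actions = []
    for sub in TASK_LIBRARY[goal]:
        actions.extend(flatten(sub))
    return actions

print(flatten("drive to the mall"))
# → ['start car', 'back down driveway', 'yield to traffic',
#    'merge', 'keep lane', 'exit', 'find space', 'pull in']
```

The hard research problem is exactly what this sketch hides: building the library of simple tasks automatically and discovering, rather than being told, how a new high-level goal decomposes into them.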

Building Trust

It’ll be fascinating to witness the dawn of a new era. As Smiroldo says, these are exciting times for his team at MBRDNA.

“What if our cars end up superhuman at nearly every task?” Smiroldo asks. “What if we had an automated co-pilot that could run everything — one that could learn who the driver is, give directions, and even hold a conversation with that driver? What if we could trust the system to book a restaurant for dinner, trust it to distinguish what type of food we like and what time makes sense for us to have dinner — and also trust that it won’t pick a $500-per-person restaurant for us?”

Right now, we’re still building that trust, because AI systems don’t have the thing we humans call “common sense.” Common sense is actually a shared human experience: data that is common to most people alive today. There will probably always be some kind of cultural difference between us and our machines, since the experiences (the data) of a machine may never be identical to the experiences of humans. The way around that is for our machines to learn more about us, and adapt.

“We want to get to a point where the car learns the customer, rather than the customer having to learn the car.”

The road ahead is promising, but it’s filled with plenty of twists and turns. Smiroldo and his team at MBRDNA promise to be there at every turn.

You can read more about the machine learning team’s groundbreaking work on AI here.