Grappling With Risk-O-Meters For Self-Driving Cars


You are a finely tuned risk calculator.

That’s right: when you drive a car, you are constantly calculating risks in real time, such as the odds that a pedestrian will suddenly dart into the street, that the car ahead of you will unexpectedly slam on its brakes, or that the cute kitty cat at the corner will inexplicably wander into the roadway in front of your car (don’t hit the feline, please!).

What makes this a particularly hard mental exercise is that you are trying to predict the future.

Most of the time, you don’t know for sure that the pedestrian is going to foolishly leap in your path, and instead you need to use clues to guess at what might happen. Does the person seem poised to enter into the street and are they perhaps not paying attention to the traffic? If the person doesn’t make eye contact with you, while you are driving, you can’t be absolutely sure that they understand the gravity of the situation.

Think about the dozens, maybe hundreds, sometimes even thousands of off-the-cuff risk assessments you make during a driving journey.

I commute to work in Los Angeles traffic, using a variety of freeways and local streets. Along the way, I encounter maniac drivers, nearby bicyclists, motorcyclists weaving through traffic, adult pedestrians, young kids darting into streets, dogs and cats, and debris falling ahead of me. Just the other day, a pick-up truck dropped several cans of paint onto the road in front of me (my car ended up with white paint splattered across the underbody, which I suppose is like getting a “free” paint job, lucky me).

You likely aren’t even explicitly aware of how much risk calculating you do, but it’s happening constantly while at the steering wheel.

If you observe a novice teenage driver, you can oftentimes see the expressions and contortions on their face as they are trying to piece together a complex calculus of what is going on around them.

When helping my own children learn to drive, they sometimes asked whether I thought a particular situation was going to develop. For example, a gaggle of kids was walking along the sidewalk as a mass, and one of them was urging the rest to jaywalk. Would the “leader” prevail, with the group spilling into the roadway and blocking traffic like a herd of cattle, or would the group resist and decide it was better to cross at the crosswalk?

In the spur of the moment, these kinds of driving predicaments need to be analyzed quickly and unhesitatingly, since any delay in what you do as a driver can lead to calamity.

You might be saying that you don’t find yourself agonizing like this and are puzzled that I am making such a claim. Keep in mind that seasoned drivers gradually become accustomed to making these decisions, almost reflexively so, and appear to magically and seamlessly spot a situation and react nearly instantaneously. The bulk of the time we all seem to be making pretty good decisions and rendering reasonably accurate risk assessments, though of course we don’t do so all of the time.

In the United States alone, there are about 6.3 million reported car crashes each year (I say “reported” since additional accidents likely occur that are never formally reported). Sadly, each year in the U.S., car accidents cause approximately 2.5 million injuries and 37,000 deaths. Many of those crashes could potentially have been averted if the human driver had done a better job of assessing risks and making appropriate driving choices.

Overall, it is crucial to realize that driving a car is not a black-or-white on-or-off kind of task.

There is haziness and fuzziness in the driving situations that we face. You try to make the best evaluation you can of the risks involved at every moment of driving, and you have to live with your assessments. Sometimes you cut things pretty close and escape a bad situation by the skin of your teeth. Other times you misjudge and scrape against something or someone, or worse.

Usually, you are mentally updating the risk picture as the driving unfolds. In the case of the gaggle of kids who might jaywalk, the leader prodding them eventually gave up, and so the risk of them meandering out into the street lessened. Note, though, that the risk did not drop to zero, since they could still have opted to enter the roadway, whether wandering out randomly or because of some other prompting factor (say a maniacal clown appeared from behind a tree, scaring the kids into scattering onto the busy street in spite of the oncoming traffic, or maybe I’ve seen too many scary movies).

For self-driving cars, one of the hardest and most vexing challenges for AI developers is crafting an AI driving system that serves as a reliable, verifiable, robust, and all-encompassing risk calculator, a capability I’ll inventively refer to as a kind of risk-o-meter.

Let’s unpack why a risk-o-meter is problematic to devise and field.

Risk-O-Meter Complexities Galore

Consider the wide range of factors that you take into account when trying to gauge roadway risks while driving your car.

The obvious factors include other cars, nearby scooters, bicycles, motorcycles, pogo stick riders, and other in-motion objects and artifacts. I’ve already mentioned the in-motion antics of pedestrians, young and old, along with the possibility of encountering dogs, cats, deer, wolves, and alligators (yes, alligators will wander onto the roads in places like Florida, as evidenced by one that did so when I took my kids to visit the theme parks in Orlando).

An in-motion object is more likely to catch your attention, though objects not yet in motion have to also be accounted for.

The other day, a tree in my community decided it was time for one of its limbs to break off and fall into the road. Someone happened to be driving near the tree at the unluckiest of moments, and managed to hit the gas and avoid the heavy and quite lethal limb.

Would you have paid any attention to a stationary and presumably static object like a tree, in terms of having to consider any risks associated with the tree while you are driving down the street?

Unless you’ve perchance had a tree uprooted by a storm, I’d wager that you would not put much cognitive effort toward assessing the risk of a tree disrupting your drive. This is an example of how a seemingly motionless object can surprisingly become an impediment to driving.

Getting an AI system to do the same kind of risk assessments is very hard.

Today’s AI systems lack any kind of common-sense reasoning, meaning that they don’t “understand” the nature of objects and what they do and don’t do. The AI has no cognitive capability to realize that people on scooters might veer in front of the car, or that a duck could decide to exit from the nearby pond and waddle into the street. None of these facets are somehow innate to, or coded into, the AI system.

The vaunted Machine Learning (ML) and Deep Learning (DL) approaches, which tend to use Artificial Neural Networks (ANNs), do not per se overcome the lack of common-sense reasoning. These AI capabilities are pattern matchers that try to find recurring patterns in large datasets. If there are enough instances of scooter riders veering into traffic, the ML or DL can potentially pick up on the pattern and therefore be alerted when a scooter is detected in the traffic scene. But suppose the training dataset didn’t contain those scooter instances; the same goes for the waddling duck.

Of course, the other avenue involves programming the AI to put two and two together: if an object such as a scooter is in motion, it is worthwhile to figure out its speed, its direction, and the likelihood that it will intersect with the path of the car. This is a relatively straightforward mathematical operation. Yet it does not capture the intention of the scooter rider. You would likely look at the rider and try to guess at the person’s motives and intentions, doing so to further enhance your risk assessment.
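To give a flavor of that straightforward mathematical operation, here is a minimal sketch of a constant-velocity closest-approach check. The function name, the five-second horizon, and the two-meter danger threshold are illustrative assumptions of mine, not anything drawn from an actual driving stack, and a real system would also handle acceleration and uncertainty:

```python
import math

def path_intersection_risk(obj_pos, obj_vel, car_pos, car_vel,
                           horizon_s=5.0, threshold_m=2.0):
    """Check whether two constant-velocity trajectories come dangerously
    close within a short time horizon.

    Positions are (x, y) in meters, velocities are (x, y) in m/s.
    Returns (at_risk, time_of_closest_approach_s, min_distance_m).
    """
    # Work in the car's frame: relative position and velocity of the object.
    rx, ry = obj_pos[0] - car_pos[0], obj_pos[1] - car_pos[1]
    vx, vy = obj_vel[0] - car_vel[0], obj_vel[1] - car_vel[1]

    v2 = vx * vx + vy * vy
    if v2 == 0.0:
        # No relative motion: the separation never changes.
        t_closest = 0.0
    else:
        # Time that minimizes |r + v*t|, clamped to [0, horizon].
        t_closest = max(0.0, min(horizon_s, -(rx * vx + ry * vy) / v2))

    dx, dy = rx + vx * t_closest, ry + vy * t_closest
    min_dist = math.hypot(dx, dy)
    return min_dist < threshold_m, t_closest, min_dist
```

For instance, a scooter 20 meters ahead and 5 meters to the side, crossing toward the lane of a car traveling at 10 m/s, flags as a risk about two seconds out, while one moving away does not.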

Devising a mathematical formula for the risk-o-meter is not easy and involves taking into account a myriad of factors, some of which might be important, some of which might be unimportant, and all of which require juggling hunches and determining on the fly what’s happening and what might happen next.

ASIL And Determining Risk-O-Meter Levels

A well-known standard in the automotive world is the Automotive Safety Integrity Level (ASIL) risk classification scheme, based on an official document referred to as ISO 26262 (there are other related automotive safety standards too, including the forthcoming ISO 21448, known as SOTIF, for Safety Of The Intended Functionality). When determining risk while driving, here’s an equation that provides a means to get your arms around risk aspects:

• Risk = Severity x (Exposure x Controllability)

Severity is important to consider when ascertaining risk while driving: you might be heading toward a brick wall that will cause you and your passengers to be smashed and killed (high severity), whereas hitting paint cans on the freeway might be relatively low in severity (fortunately, they did not cause any damage under my car, and I rolled over them without missing a beat).

Formally, severity is a measure of the potential harm that can arise and can be categorized into: (S0) No injuries, (S1) Light and moderate injuries, (S2) Severe injuries, (S3) Life-threatening and fatal injuries.

Exposure captures the chances of the incident occurring at all (i.e., the probability of being in an operational situation of a hazardous nature). For example, my rolling over the paint cans was nearly certain, since they popped off the truck and into the roadway without any advance warning, while the alligator we saw on the road was visible well in advance and readily avoidable.

Formally, exposure can be divided into: (E0) negligible, (E1) very low, (E2) low, (E3) medium, (E4) high.

Controllability refers to the capability of maneuvering the car so as to avoid the pending calamity. This can range from avoiding the situation entirely, to merely being able to skirt it, to finding that no matter what you do there is no sufficient means to steer, brake, or accelerate to avert the moment.

Formally, controllability can be divided into: (C0) generally controllable, (C1) simply controllable, (C2) normally controllable, (C3) difficult or uncontrollable.

By combining the three overarching factors of severity, exposure, and controllability, you can arrive at an indication of the risk assessment for a given driving situation. Presumably, we do this in our heads, cognitively, though how we actually do so, and whether we even use this kind of explicit logic, is debatable, since no one really knows how our minds work in this capacity.
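As a rough sketch of the classification mechanics (and only the mechanics; the hard part is judging the severity, exposure, and controllability inputs in the first place), the standard’s S/E/C-to-ASIL lookup table is often summarized by the shorthand of simply summing the class indices, with the worst case of S3 + E4 + C3 mapping to the highest rating, ASIL D. The function below assumes that shorthand:

```python
def asil_level(severity, exposure, controllability):
    """Map ISO 26262 S/E/C classifications to an ASIL rating.

    severity: 0-3 (S0-S3), exposure: 0-4 (E0-E4),
    controllability: 0-3 (C0-C3).

    Uses the commonly cited shorthand for the standard's lookup table:
    sum the class indices, where 10 (S3+E4+C3) -> ASIL D, 9 -> C,
    8 -> B, 7 -> A, and anything lower -> QM (quality management,
    meaning no ASIL is required). An S0, E0, or C0 class always
    yields QM, since one factor being negligible removes the hazard.
    """
    if severity == 0 or exposure == 0 or controllability == 0:
        return "QM"
    total = severity + exposure + controllability
    return {10: "ASIL D", 9: "ASIL C", 8: "ASIL B", 7: "ASIL A"}.get(total, "QM")
```

So a life-threatening hazard with high exposure that is difficult to control (S3, E4, C3) rates ASIL D, while a light-injury hazard that is very rare and simply controllable (S1, E1, C1) needs no ASIL at all.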

Don’t be misled into looking at the formula and construing that therefore it is simple to encode this into an AI system. There is a tremendous amount of judgment that goes into how you as a human classify the exposure, the severity, and the controllability.

Conclusion

There is a famous AI ethics discussion point around something called the Trolley Problem. In short, imagine a scenario in which you were able to steer a trolley toward one of two train tracks and at the end of those tracks were people that would get hit by the trolley. If one track led to say five people while the other track led to only one person, which way would you steer the trolley?

I mention this philosophical question because autonomous cars are going to confront the same kind of issues when driving on our roadways. There will be situations in which the risks of hitting perhaps a child darting into the street might need to be weighed against the risks of swerving the car to avoid hitting the child but then have the car rollover or ram into a post or wall, injuring or killing the passengers within the driverless car.

What should the AI decide to do?

You can perhaps now discern why the risk-o-meter is a crucial element of the AI system for an autonomous car, and also why it is a difficult capability to concoct. The next time you come near a driverless car, perhaps one being tried out on your local public roadways, realize that within the AI system, its coding, and its ML or DL, there is something undertaking a risk-o-meter-like effort, deciding how and where the AI should maneuver the car.

There’s no magic involved, it’s all software and hardware, and no Wizard of Oz standing behind the curtain.