AI Needs To Gain Famed MacGyver Shrewdness, Including For AI Self-Driving Cars


 MacGyver solved it again!

Who or what is a MacGyver, you might wonder?

Well, most people have heard of MacGyver, the TV series whose main character always manages to find a clever means to extricate himself from some puzzling predicament, using his wits to devise a solution out of rather everyday items.

Fans know that he carries a Swiss Army knife, rather than a gun, since he believes that using his creativity and inventiveness will always allow him to deal with any untoward circumstance (the knife is handy when you need to defuse a bomb, or when you need to take apart a toaster and reuse its electronics for a completely different purpose and ultimately save your life accordingly).

Turns out that you don’t need to have ever seen the show or watched any YouTube clips and yet might still know what it signifies to be a “MacGyver” in dealing with a thorny task (the name has become part of our everyday lexicon).

In short, we now consider any kind of innovative solution to be characterized as a MacGyver-like approach, assuming that it is an elegant and somewhat simple solution to a seemingly intractable problem.

Let’s parse that statement.

One crucial element is that the problem itself has to be somewhat bedeviling.

If the problem is straightforward and not filled with complications, you presumably could solve it with just your ordinary noggin and not need to put on a MacGyver-like thinking cap.

Another vital aspect is that the solution cannot be blatantly obvious.

In other words, if a monkey could immediately see how to solve the problem, you don’t need to ratchet up into the stratosphere of problem-solving and instead can just do a grab-and-go to solve the matter at hand.

Okay, so the problem needs to be tough or relatively intractable, and the solution has to be non-obvious and requires a stretch of the imagination to come up with.

What else?

The problem needs to be solvable.

This is important and often not readily knowable at the start of the problem-solving process.

Oftentimes, when a problem is presented to you or emerges, you aren’t exactly sure whether there are any means to solve it.

As such, you might explore a variety of potential solutions, and in so doing discover that a whole bunch of them aren’t viable for the actual problem.

In the case of the MacGyver lore, he always does find a solution, which is heartwarming, but you can’t expect in real life that a solution is always findable.

It is perhaps helpful to assume that a solution is findable, which can boost your spirits and inspire you when in the throes of trying to solve a hairy problem.

Those that give up right away and assume there is no solution have effectively tossed in the towel and aren’t, therefore, going to put in the energy to try and ferret one out.

That being said, there is also the real-world aspect that ultimately there might not be a solution (unlike the TV show, which always provides a fairy-tale happy ending).

An added twist is that maybe there is a solution, yet only time will allow the solution to be realized, and thus you might not immediately be able to solve the predicament, even though you’ve divined how it could be solved.

How could you have discovered a solution and yet there is a lapse in time before you can solve the problem?

An easy example would be a candle that, once lit, will slowly burn through a rope, and once the rope is burned through, it releases you from a trap.

In that example, you knew a viable solution, yet it took a while for the solution to be carried out.

Suppose though that you had no evident means of lighting the candle?

This becomes another form of problem, one that is connected to the larger problem of presumably being trapped. It is a “new” problem in that it ties to your solution and might or might not directly be considered part of the originating problem of having been trapped.

Perhaps there’s a box of matches in the other room, which if you could reach it, you’d grab a match to light the candle to then burn through the rope to release you from the trap (reminiscent of a Rube Goldberg arrangement).

Your proposed solution is now stalled on the quest of getting those matches.

Time might pass, and it turns out that the box of matches gets knocked off a table by a wind gust, spilling the matches, one of which rolls into your trapped area.

Anyway, the point is that you will not necessarily be able to perform a MacGyver instantaneously and must sometimes allow time to go forward for a solution to become viable or to emerge (a TV show needs to wrap up its solution on a timely basis since the show only has thirty minutes or an hour, while in real life things might be of much longer duration).

To be a true MacGyver-like scenario, we usually expect that whatever solution is devised will be straightforward and elegant at the same time.

This elegance criterion can be hard to boil down and explain in words. It is one of those things whereby if you see it, you’ll be able to decide whether it was elegant or not (akin to beauty being in the eye of the beholder).

In the TV show, MacGyver is nearly always faced with a life-or-death predicament, but for most real-world applications of the MacGyver-like approach, you usually aren’t dealing with life-or-death matters. The point is that sometimes the MacGyver approach is handy for ordinary matters involving difficult problems, while in other instances greater matters might be on the line.

This brings us to an important consideration about how we think as humans, along with how AI systems are being crafted and the limits of what they have been able to achieve to date. Please be aware that the AI of today is not even close to being anything equivalent to true human intelligence, which might be a shocking point for some to realize, but nonetheless is indeed the case.

Sure, there are pockets of situations whereby an AI application has seemingly been able to perform a task as well as a human might; these are constantly in the news. That though is a far cry from being able to exhibit a full range of intelligence and pass any kind of Turing Test (a famous method in AI to try and ascertain whether an AI system is able to exhibit human intelligence, see my analysis at the link here).

Today’s AI systems tend to be classified as narrow AI, meaning that they can possibly “solve” a narrow problem; such an AI system is not AGI (Artificial General Intelligence) and lacks human qualities such as common-sense reasoning (see link here).

In fact, one significant concern about the rampant use of Machine Learning (ML) and Deep Learning (DL) is that those computational pattern-matching algorithms tend to be brittle, susceptible to falling out of step when faced with exceptions or unusual cases. The odds are that any situation requiring or deploying a MacGyver is by definition bound to be an exceptional or unusual case (otherwise, you’d use a brute-force algorithm or ordinary solving methods).
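
To make that brittleness concrete, here is a minimal sketch in which a trained classifier flags low-confidence inputs as possible edge cases. The threshold, logits, and labels are hypothetical stand-ins (low confidence is only a crude proxy for novelty), not any actual self-driving stack:

```python
import numpy as np

CONFIDENCE_THRESHOLD = 0.90  # hypothetical cutoff: below this, treat the scene as unfamiliar

def softmax(logits: np.ndarray) -> np.ndarray:
    exps = np.exp(logits - logits.max())
    return exps / exps.sum()

def classify_scene(logits: np.ndarray) -> str:
    """Return a label, or flag the scene as a possible edge case."""
    probs = softmax(logits)
    best = int(probs.argmax())
    if probs[best] < CONFIDENCE_THRESHOLD:
        # Nothing in the training patterns fits well: precisely the moment
        # a MacGyver-like fallback would need to take over.
        return "edge case: defer to fallback handling"
    return f"label_{best} (confidence {probs[best]:.2f})"

print(classify_scene(np.array([9.0, 0.5, 0.2])))  # familiar scene: confident label
print(classify_scene(np.array([1.1, 1.0, 0.9])))  # near-uniform logits: edge case
```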

Here’s an intriguing question to ponder: “Will the advent of AI-based true self-driving cars potentially be stymied by exceptional or unusual circumstances, and if so, could the use of MacGyver-like approaches help overcome those impediments?”

The answer is that yes, so-called edge cases (another term for exceptional or unusual instances) are a significant concern for the safety of true self-driving cars, and yes, if AI systems could employ MacGyver-like capabilities it might help to deal with those tough moments.

Let’s unpack the matter and see.

The Levels Of Self-Driving Cars

It is important to clarify what I mean when referring to AI-based true self-driving cars.

True self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).
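
For readers who like things concrete, here is a small sketch encoding those levels; the class name and helper are my own illustrative labels, not an official SAE artifact:

```python
from enum import IntEnum

class DrivingAutomation(IntEnum):
    """Illustrative encoding of the driving-automation levels discussed here."""
    LEVEL_2 = 2  # semi-autonomous: ADAS assists, human co-shares the driving
    LEVEL_3 = 3  # semi-autonomous: heavier automation, human must stay ready
    LEVEL_4 = 4  # true self-driving within limited, selective conditions
    LEVEL_5 = 5  # true self-driving anywhere a human could drive (not yet achieved)

def requires_human_driver(level: DrivingAutomation) -> bool:
    # Levels 2 and 3 co-share the driving task; Levels 4 and 5 do not.
    return level <= DrivingAutomation.LEVEL_3

print(requires_human_driver(DrivingAutomation.LEVEL_3))  # True
print(requires_human_driver(DrivingAutomation.LEVEL_4))  # False
```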

There is not yet a true self-driving car at Level 5, and we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different from driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately: despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can divert their attention from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.

Self-Driving Cars And MacGyver

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

To date, the efforts to devise self-driving cars have generally consisted of getting the AI to be able to drive in relatively ordinary driving situations.

This makes sense, namely, get the “easier” stuff done first (for clarity, none of this is especially easy), which involves having the AI driving system be able to drive in a somewhat calm neighborhood or a relatively everyday freeway traffic setting.

Furthermore, if you are using collected driving data to train an ML/DL system, the odds are that most of the driving data is also primarily about day-to-day driving and bereft of those out-of-the-ordinary driving occasions.

Think about your own driving efforts.

Much of the time, you are driving along, mulling over what you’ll eat for dinner that night or replaying in your mind that difficult conversation you had with your boss the other day, and not seemingly paying rapt attention to the roadway.

On and on this “mindless” driving often occurs.

Then, there are those rarer moments (hopefully rare), when something extraordinary yanks you out of your complacency and you need to immediately respond.

It could be a life-or-death circumstance in which you have to size up, in real-time, a difficult problem facing you in the traffic setting, assess what your options might be, and enact those options soon enough and sufficiently to avoid death or destruction.

All in a whisper of a moment.

Most would concede that today’s AI driving systems are decidedly not yet ready to cope with those moments if the problem isn’t one that the AI driving system has already “seen” previously or been otherwise pre-programmed to handle.

A novel or surprise situation is not good for AI driving systems, right now, and thus not good for human passengers, nor pedestrians, nor other nearby human-driven cars.

What to do about those edge problems?

The usual answer is to keep pushing along on roadway trials and collecting lots of driving data, and hopefully, eventually, all possible permutations and possibilities of driving situations will have been captured, and then presumably analyzed so they can be dealt with.

One has to be dubious about such an approach.

A leading self-driving car company is Waymo, which has accumulated around 20+ million roadway miles all told, and though that seems at first glance an impressive number, keep in mind that humans drive over 3 trillion miles per year; finding a needle-in-a-haystack edge case within such a comparatively tiny set of miles is probabilistically much less likely.
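
A bit of back-of-the-envelope arithmetic with those round figures shows just how small the sample is:

```python
waymo_miles = 20e6           # roughly 20+ million accumulated test miles
human_miles_per_year = 3e12  # humans drive over 3 trillion miles per year

fraction = waymo_miles / human_miles_per_year
print(f"Share of one year of human driving: {fraction:.6%}")
# -> 0.000667%, i.e., well under a thousandth of one percent
```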

Insiders of the self-driving car industry also know that miles are not just miles, no matter which self-driving car company is doing roadway tryouts, meaning that if you drive in the same places over and over, those miles are not necessarily as revealing as driving on a more varied and changing mix of roadways and road conditions (this is partially another criticism of the so-called disengagement reports about driverless cars, see my analysis at this link here).

Another offered approach involves doing simulations.

Automakers and self-driving tech firms do tend to use simulations, in addition to driving on roadways, though there is an ongoing debate about whether simulations should be undertaken prior to allowing public roadway use or whether it is satisfactory to do both at the same time. There is also debate about whether simulations are an adequate substitute for driven miles (once again, it depends upon the type of simulation undertaken and how it is constructed and utilized).

Some believe that AI driving systems ought to have a MacGyver-like component, prepared to tackle those extraordinary problems that arise while driving.

It would not especially be based on prior oddball or edge-case situations, and instead be a generalized component that could be invoked when the rest of the AI driving system has been unable to resolve an unfolding circumstance.
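
One way to picture such a component is as a last-resort handler that the driving pipeline invokes whenever its ordinary machinery comes up empty. Everything in the sketch below, including the function names and the one-second decision budget, is a hypothetical assumption rather than any vendor’s actual architecture:

```python
import time
from typing import Optional

DECISION_BUDGET_SECONDS = 1.0  # hypothetical budget; real systems face even tighter limits

def trained_policy(scene: dict) -> Optional[str]:
    """The ordinary ML-trained driving policy; returns None when stumped."""
    return scene.get("known_maneuver")  # pattern match against seen situations

def macgyver_fallback(scene: dict, deadline: float) -> str:
    """Hypothetical generalized reasoner for never-before-seen situations."""
    while time.monotonic() < deadline:
        pass  # improvise: impasse detection, re-planning over available options
    return "minimal-risk maneuver"  # e.g., safely slow and pull over

def decide(scene: dict) -> str:
    deadline = time.monotonic() + DECISION_BUDGET_SECONDS
    action = trained_policy(scene)
    if action is not None:
        return action
    # The usual machinery is stumped: hand off to the MacGyver-like
    # component, which must still answer within the real-time budget.
    return macgyver_fallback(scene, deadline)

print(decide({"known_maneuver": "lane keep"}))  # handled by the trained policy
print(decide({}))                               # novel scene: fallback engaged
```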

In some manner of speaking, it would be like AGI but specifically in the domain of driving cars.

Is that even possible?

Some argue that AGI either is AGI or it is not; thus, suggesting that you might construct an AGI for a specific domain runs counter to the AGI notion overall.

Others argue that when seated in a car, a human driver is making use of AGI in the domain of driving a car, not solving world hunger or having to deal with just any problem, and thus we ought to be able to focus attention on an AGI for the driving domain alone.

Hey, maybe we should apply a MacGyver to the problem of solving edge cases and find an elegant solution to doing so, which might or might not consist of employing a MacGyver within the AI driving system itself.

That’s a mind twister, for sure.

Conclusion

A handy paper on the AI challenges in dealing with MacGyver-like problems was written by researchers Sarathy and Scheutz at Tufts University (here’s the link). The authors point out that an AI system would likely need to be able to undertake numerous arduous tasks and sub-tasks in handling any MacGyver-like situation, including being able to do impasse detection, domain transformation, problem restructuring, experimentation, discovery detection, domain extension, and so on.
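
To give a feel for how those sub-tasks might hang together, here is a skeleton loop using the paper’s vocabulary; the control flow and the toy state dictionary are my own illustrative placeholders, not the authors’ algorithm:

```python
def impasse_detected(state: dict) -> bool:
    return state.get("stuck", False)  # placeholder test for being stuck

def domain_transformation(state: dict) -> dict:
    return {**state, "redescribed": True}  # re-describe objects and tools

def problem_restructuring(state: dict) -> dict:
    return {**state, "goal_recast": True}  # recast the goal itself

def experimentation(state: dict) -> dict:
    # Try candidate actions; in this toy run, one of them works.
    return {**state, "stuck": False, "discovery": True}

def discovery_detected(state: dict) -> bool:
    return state.get("discovery", False)

def domain_extension(state: dict) -> dict:
    # Fold a newly found affordance (a knife used as a screwdriver)
    # back into the agent's model of the world.
    return {**state, "extended": True}

def macgyver_solve(state: dict) -> dict:
    while impasse_detected(state):
        state = domain_transformation(state)
        state = problem_restructuring(state)
        state = experimentation(state)
        if discovery_detected(state):
            state = domain_extension(state)
    return state

print(macgyver_solve({"stuck": True}))
```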

Essentially, it is a very hard problem to get an AI system to act like MacGyver, regardless of whether there is a Swiss Army knife available or not.

In the case of an AI driving system, realize too that the MacGyver component would need to act in real-time, having only split seconds to ascertain what action to take.

Plus, the actions taken are most likely linked to life-or-death consequences, including the qualms associated with the Trolley Problem (this involves having to make choices between which deaths or injuries to incur versus other sets of deaths or injuries, see my explanation at the link here).

If you say that we ought not to seek a MacGyver-like capability, it raises the obvious question of what alternatives we have; meanwhile, self-driving cars are proceeding along, absent any such ingenious capacity.

There is also the belief that if we could achieve a MacGyver for the AI driving domain, we might be able to start stretching it to other domains, allowing a step-wise achievement of AGI across all domains, though that’s quite a contentious assertion and a story for another day.

MacGyver is known for saying that you can do anything you wanna do if you put your mind to it.

Can we get AI to do anything we want it to do if we put our minds to it?

Time will tell.
