Want a True Bionic Limb? Good Luck Without Machine Learning

Conscious thought is holding back the cyborg revolution. Which is why the next generation of prosthetics will use computer vision and neural nets instead.
This prosthetic hand can "see" with its attached 99p camera. Newcastle University

In 2006, the Defense Advanced Research Projects Agency vowed to build, within four years, a brain-controlled prosthetic arm indistinguishable from the real thing. Yet after hundreds of millions of dollars and more than a decade of engineering, most limb replacements (even those wired straight to the noggin) struggle to mimic human gestures. Cracking the neural code for movement was trickier than expected.

The trouble lies in getting past conscious thought. “Capturing the body’s innate ability to just know what to do is something really lacking from all prosthetics today,” says Mike McLoughlin, who manages the prosthetics program at Johns Hopkins. Any inputs the arm receives—whether from the sparks of nerves left over on a stump, or directly from motor neurons in the brain—require explicit instructions, which humans are bad at doling out. So the latest attempts at a true bionic arm simply avoid the problem of intention by using machine learning and computer vision to make decisions.

Imagine you’re learning to play the guitar. Every time you attempt a new chord, you must think about where to place your fingers. And you must do that over and over, consciously placing them in space, thinking about timing and pressure. Once you’ve developed that muscle memory, your brain simply says “play a C chord” and you do it. You can even do it without looking. It doesn’t work that way for people with prosthetic limbs.


“Reaching out to grab something is never going to be a subconscious reaction,” says McLoughlin. “You always have to look, you always have to think.” Which explains why truly bionic limbs remain elusive. Some of the most advanced neurally controlled prosthetics, like those developed at Johns Hopkins Applied Physics Laboratory with Darpa money, receive inputs from an electrode array on the motor cortex, which controls voluntary movement. But that approach hasn't delivered the fine-grained control over how limbs adjust and react that researchers once expected. And if the interface can’t capture the brain’s muscle memory, it can’t convey a learned response to an artificial arm. The arm must learn on its own.

This is where computer vision and deep neural networks come in. Machines are far better than humans at the complicated calculations of distance, speed, force, and shape that underpin muscle memory. So researchers are training artificial limbs to make decisions once left to the people using them: things like how quickly to accelerate toward a cup of coffee, and what kind of grip to make to pick it up.

In a study published Wednesday in the Journal of Neural Engineering, British scientists used more than 500 images of graspable objects, each categorized into one of four possible grips, to train an artificial vision system. Then they mounted a camera on a prosthetic arm, which used computer vision algorithms to “see” and grab the object 80 percent of the time without its users physically stimulating their muscles to adjust the grip. The researchers used a myoelectric arm, which works by placing electrodes on the skin to detect the nerve signals that an algorithm translates into instructions for the arm. Myoelectric prostheses are the type widely available to the 100,000 or so upper-limb amputees in the US.
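
The team hasn't released its code, but the core trick, classifying a camera frame into one of a handful of grasp types, can be sketched in a few lines of a standard deep-learning library. The network, image size, and grip labels below are illustrative assumptions, not the Newcastle group's actual model.

```python
# Hypothetical sketch of a four-way grasp classifier, loosely in the spirit of
# the Newcastle study. Class names, image size, and architecture are assumed.
import torch
import torch.nn as nn

GRASP_TYPES = ["palmar_wrist_neutral", "palmar_wrist_pronated",
               "tripod", "pinch"]  # assumed labels for the four grip classes

class GraspNet(nn.Module):
    """Tiny convolutional network: camera frame in, one of four grasp types out."""
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)             # (N, 32, 16, 16) for a 64x64 input
        return self.classifier(x.flatten(1))

# One camera frame (batch of 1, RGB, 64x64 pixels) -> predicted grasp type.
model = GraspNet()
frame = torch.rand(1, 3, 64, 64)         # stand-in for the wrist camera image
grip = GRASP_TYPES[model(frame).argmax(dim=1).item()]
print(grip)
```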

Neural control is not yet ready for prime time, but computer vision is already making existing artificial arms significantly more agile, up to 10 times faster than anything on the market. There are still limitations, of course. Grip dimensions aren’t automatically adjusted for size, for example, meaning such a hand would make the same motion for an 8-ounce latte and a 24-ounce drip coffee, requiring the user to make fine adjustments. Such an arm also struggles as objects move farther away, because it “sees” them as smaller than they actually are. But bioelectronic engineer Kianoush Nazarpour, who led the study, says that writing software to handle such challenges seems relatively straightforward. “The system could easily become more intelligent,” he says.
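
The distance problem is plain geometry: a single camera only measures how many pixels an object spans, and that apparent size shrinks as the object recedes. A minimal sketch, assuming a simple pinhole-camera model and made-up numbers, shows how a distance estimate would recover the true width a grip aperture needs.

```python
# Minimal sketch of the size/distance problem: with one camera, an object's
# apparent width in pixels shrinks with distance, so recovering its true width
# (and hence the grip aperture) needs a distance estimate.
# All numbers below are illustrative, not from the study.

def estimate_object_width(pixel_width: float,
                          distance_m: float,
                          focal_length_px: float) -> float:
    """Pinhole-camera relation: real width = pixel width * distance / focal length."""
    return pixel_width * distance_m / focal_length_px

FOCAL_LENGTH_PX = 600.0   # assumed camera focal length, in pixels

# The same mug spans far fewer pixels at arm's length than up close...
near = estimate_object_width(pixel_width=160, distance_m=0.3, focal_length_px=FOCAL_LENGTH_PX)
far  = estimate_object_width(pixel_width=48,  distance_m=1.0, focal_length_px=FOCAL_LENGTH_PX)
print(near, far)   # ...but both come out to ~0.08 m once distance is known
```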


It has to if science is to create a truly bionic arm. A growing number of researchers at places like Johns Hopkins, the University of Pittsburgh, and Carnegie Mellon are combining vision learning with brain-computer interfaces. The idea is to let users make the intuitive decisions and lead the prosthetic to the general area, then let the machine within the machine take over. The computer can answer niggling questions like “Where is the object? How far am I from the object? How wide must I open the hand to grasp it? How much force is required to lift it?” That more closely mimics how the body works anyway. After all, the brain says, “grab that,” not “rotate your wrist 46 degrees and pinch downward with 0.38 newtons of force.”
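
A rough sketch of that division of labor, with entirely hypothetical names and numbers, might look like this: the user's command steers the reach, while the vision side fills in the aperture, wrist angle, and force.

```python
# Hypothetical sketch of shared control: the user's neural signal drives the
# hand toward a target, while the vision system supplies the low-level grasp
# parameters. Names and values are illustrative, not any lab's actual system.
from dataclasses import dataclass

@dataclass
class GraspPlan:
    aperture_m: float           # how wide to open the hand
    wrist_rotation_deg: float   # how far to rotate the wrist
    grip_force_n: float         # how hard to pinch

def plan_grasp(object_width_m: float) -> GraspPlan:
    """Vision side: turn a perceived object into concrete grasp parameters."""
    return GraspPlan(aperture_m=object_width_m * 1.2,   # small safety margin
                     wrist_rotation_deg=46.0,
                     grip_force_n=0.38)

def blend(user_velocity: float, auto_velocity: float, arbitration: float) -> float:
    """Mix user intent with the controller's suggestion (0 = all user, 1 = all machine)."""
    return (1 - arbitration) * user_velocity + arbitration * auto_velocity

# The user says "grab that"; the machine fills in the niggling details.
plan = plan_grasp(object_width_m=0.08)
hand_speed = blend(user_velocity=0.15, auto_velocity=0.10, arbitration=0.6)
print(plan, hand_speed)
```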

But that’s also why it’s going to be tricky to figure out. “Seamlessly integrating neural control with automatic control is more art than science,” says Steven Chase, a neural engineer at Carnegie Mellon. “It’s going to be a lot of trial and error.” Which means making a fully bionic arm will require more machines, and more brains, too.