Step inside the MIT lab designing new human-computer interfaces

Suranga Nanayakkara wants to humanise devices for the blind and deaf. And he hopes his breakthroughs will have a profound impact on how we all interact with technology

Suranga Nanayakkara slips a black ring onto his finger and points. He’s showing it off, but not because it’s a fancy wedding band. This ring, he explains, helps visually impaired people read by converting text into speech.

Nanayakkara points at a poster on the wall more than a metre away, clicks a small button on the side of the ring, and almost instantaneously a female voice starts reading out the poster’s header through the headphones he’s wearing. Such optical character recognition technology, or OCR, already exists but is often locked inside clunky highlighter-style devices that are slow and cumbersome. This ring, Nanayakkara explains, is a little different.

The Finger Reader lets people read only what they’re pointing at, promising a relatively fuss-free experience, especially when out and about. The camera captures an image and the software crops just the section being pointed at to extract the information you want. “It’s useful if I’m in a restaurant and I want to order something off the menu,” says Nanayakkara, a computer scientist and trained engineer at the Singapore University of Technology and Design. “Or if I’m in the library and want to pick the right book off the shelf, or to see what currency notes I have in my wallet.”
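The capture-then-crop step can be sketched in a few lines of Python. This is a hypothetical illustration, not the Finger Reader’s actual software: the function names, the character-grid “image” and the stand-in OCR and speech callbacks are all assumptions made for the example; a real device would pass the cropped patch to an OCR engine and a speech synthesiser.

```python
# Hypothetical sketch of the pipeline described above: crop the image
# around the pointed-at spot, OCR just that patch, then voice the result.

def crop_region(image, cx, cy, half_w, half_h):
    """Crop a rectangle centred on the pointing position (cx, cy).

    `image` is a list of rows; bounds are clamped so pointing near an
    edge still yields a valid crop.
    """
    top = max(0, cy - half_h)
    bottom = min(len(image), cy + half_h + 1)
    left = max(0, cx - half_w)
    right = min(len(image[0]), cx + half_w + 1)
    return [row[left:right] for row in image[top:bottom]]

def read_pointed_text(image, cx, cy, ocr, speak, half_w=4, half_h=0):
    """Crop around the pointer, OCR only that patch, and speak the text."""
    patch = crop_region(image, cx, cy, half_w, half_h)
    text = ocr(patch)
    if text:
        speak(text)
    return text

# Toy "image": each cell holds a character, standing in for pixels.
menu = [
    list("SOUP  $4"),
    list("PASTA $9"),
    list("CAKE  $5"),
]

# Stand-ins for an OCR engine and a text-to-speech voice.
fake_ocr = lambda patch: "".join(patch[len(patch) // 2]).strip()
spoken = []
fake_speak = spoken.append

# Point at the middle of the second menu line.
result = read_pointed_text(menu, cx=3, cy=1, ocr=fake_ocr, speak=fake_speak)
print(result)  # → PASTA $9
```

Cropping before recognition is the key design point the article describes: the OCR engine sees only the small region under the finger, so the device reads one menu line rather than the whole page.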

Nanayakkara, 36, first began working on the Finger Reader five years ago while doing a post-doc at MIT. Back then, the device was chunky, roughly the size of two matchboxes stacked on top of one another. The technology improved over the years, but the aesthetics and functionality remained questionable — last year’s Finger Reader, although now eraser-size, had a thick wire running from its base to a processing device worn on the wrist like a watch.

“The blind people we tested it on brutally said they wouldn’t use it. They said, ‘I cannot wash my hands with this, I can’t eat my lunch with this, it’s so cumbersome,’” he recalls. The feedback was simple: it has to be a device people can wear and forget all about. To free up the user’s hands, Nanayakkara and his team moved the processing device into a pair of Bluetooth-enabled, bone conduction headphones. They shifted the camera to sit with the processor, so the pointer could shrink in size. The result: a sleek, inconspicuous-looking ring.

The latest version of Finger Reader is now ready for testing. Thirty visually impaired people from Singapore and Sri Lanka, where Nanayakkara is originally from, began a six-month trial late last month. They will wear the device every day, and the researchers will track what kind of images they capture, how quickly the Finger Reader responds, and how many tries it takes for the correct text to be read, among other things.

The device is just one of a number of new technologies Nanayakkara is developing in his Augmented Human Lab. He established the Singapore-based lab in 2011, with a vision of humanising technology by making it more intuitive. “Technology can do beautiful things,” he says, “but a collection of smart devices may not make you smarter. There seems to be a gap between what technology has to offer and what we are naturally able to do – our lab is focusing on how we can bridge this gap.”

A lot of Nanayakkara’s projects, like the Finger Reader, are targeted at helping people with disabilities. There’s Swimsight, a lights-based starting system that changes colour to signal the start of a race to deaf swimmers. And Knoctify, a commercially available doorbell that translates knocking into lights and vibrations for the hard of hearing. It’s an interest that was sparked as a teenager, when Nanayakkara began volunteering regularly at a school for deaf children that his aunt ran close to the town of Piliyandala, where he grew up, roughly 20 kilometres from the Sri Lankan capital Colombo. Nanayakkara’s first project was to create SoundFloor, a vibrating wooden platform to help the children in his aunt’s school better understand rhythm and music. It’s something that still gets used every day.

Nanayakkara inherited his love for tinkering with objects and creating new things from his mother Manel. An electronics engineer, Manel liked to keep her hands busy with various fix-it-up projects around the house. Summers in Sri Lanka often brought sweltering heat and droughts, disrupting the country’s hydroelectric power supply. To get around this and allow her children to keep studying, Manel rewired the lights in their house so that they would connect to a car battery when the power went out.

Nanayakkara’s favourite memory, however, is of his mother’s homemade firecrackers. “We had these firecrackers that shot up like rockets, but in Sri Lanka, it’s done in a very raw way so it’s dangerous to hold them and tie them up,” he says. “So my mum made this little rocket launcher that has a filament. When you press a button, it heats up automatically and fires the thing.”

It’s this let’s-make-something-better mindset that has guided Nanayakkara from his Piliyandala days to where he is today – one of the world’s leading experts in creating human-computer interfaces. Never one to shy away from embracing new opportunities, Nanayakkara turned down a full scholarship to Sri Lanka’s top university, which was walking distance from his home. Instead, he moved to Singapore for his undergraduate degree despite having limited funds and barely being able to speak English. He taught himself the language by recording his lectures and using a dictionary for translation. Nanayakkara has never looked back.

The end of January signals another life-changing move for Nanayakkara, when he relocates his lab to the University of Auckland, New Zealand. Once there, Nanayakkara and his team will keep working on the Finger Reader and other technologies that strive to provide “assistive augmentation” to the deaf and blind. “I want to continue to push, to think about how we can interact with computers in a new way.”

This article was originally published by WIRED UK