CHI Part 2: Creating technologies that enhance lives in the areas of health and eye tracking


By Justin Cranshaw, Researcher, Microsoft Research

CHI 2017

With its focus on the relationships between people and computation, the field of human-computer interaction (HCI) excels at creating technologies that marry human skills with computational functionality to enable outcomes neither humans nor computers could achieve by themselves. This year at the ACM CHI Conference on Human Factors in Computing Systems, Microsoft is presenting several groundbreaking works showcasing how humans and machines can work better together.

Building a healthier future


Recent advances in sensing, machine learning, and personal device design offer a promising vision for enhancing people’s lives in the most fundamental ways: by improving their health and well-being. HCI researchers at Microsoft have been exploring how computers and people, healthcare providers and patients alike, can work together in complementary ways to build a healthier future.

Multiple sclerosis (MS) is a debilitating neurological disease that afflicts millions of people worldwide. Because attacks can occur at unpredictable intervals over a long period, treatment regimens must include regular monitoring to track the progression of the disease. Assess MS is a system that uses the computer vision capabilities of Microsoft’s Kinect to help medical professionals assess disease progression in MS. In a paper appearing at CHI, Assessing Multiple Sclerosis with Kinect: Designing Computer Vision Systems for Real-world Use, researchers Cecily Morrison, Kit Huckvale, Robert Corish, Jonas F Dorn, Peter Kontschieder, Kenton O’Hara, the Assess MS Team, Antonio Criminisi, and Abigail Sellen describe the design process and resulting insights of a prototype system. This system enables a health professional and a computer vision system to work together on MS monitoring, drawing on the natural strengths of both the human and the computer.

With the recent rise of technologies that gather and track personal healthcare data, it’s important to design for how patients and healthcare providers actually engage with these technologies. As reported in their paper, Self-tracking for Mental Wellness: Understanding Expert Perspectives and Student Experiences, Bongshin Lee, Microsoft senior researcher, along with Lauren Wilcox, assistant professor, and student Christina Kelley, both from Georgia Institute of Technology, conducted research to better understand how personal data about one’s mood, physical health, and social activities, collected on devices such as smartphones and activity trackers, can be used to help students manage stress and improve their mental wellness. By focusing on stress, anxiety, and depression, these studies reveal how both students and health professionals prefer to engage with data and devices. This work will inform the design of applications, wearable devices, and reporting systems that help clinicians better understand when students might be struggling.

In addition to these health-related works at the main conference, several Microsoft authors are also contributing to the Symposium on Computing and Mental Health, which brings together researchers from multiple disciplines who seek to use digital solutions to help those in mental distress. One such work, Machine Learning for Precise Targeting of a Mobile Dialectical Behavior Therapy Skills Training Application by Mary Czerwinski, Ran Gilad-Bachrach, Daniel McDuff, Kael Rowan, and Ann Paradiso, develops and evaluates a mobile application prototype that introduces skills related to dialectical behavior therapy (DBT), a common treatment for suicidal tendencies and borderline personality disorder. This work weaves together interactions with a chatbot (“eMarsha”) and videos of Dr. Marsha Linehan, the creator of DBT, discussing therapeutic concepts and skills. With this work, the authors explore new ground in using artificial intelligence and language understanding technologies to extend the reach of the therapist.

The eyes have it: advancements in human gaze tracking

Eye contact is an important social signal in person-to-person interactions. For many years, computer science researchers have likewise been training computers to detect where our eyes are focusing, in order to create new applications and experiences in which humans and computers can work better in unison. Microsoft Research has made significant progress over the last few years in advancing fundamental technologies that can accurately detect the focal point of human gaze.
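To make the idea of detecting a gaze focal point concrete, here is a minimal, illustrative sketch (not taken from any of the papers discussed below) of the classic calibration step many gaze trackers use: the user fixates a few known on-screen targets, and a least-squares mapping from eye features to screen coordinates is fit from those samples. The feature values and screen geometry here are invented for illustration.

```python
# Minimal sketch of calibration-based gaze estimation: fit a least-squares
# affine mapping from 2-D eye features (e.g., pupil-center offsets from a
# vision pipeline) to screen coordinates. All values are illustrative.
import numpy as np

def fit_gaze_mapping(eye_features, screen_points):
    """Fit an affine mapping screen = [ex, ey, 1] @ A by least squares."""
    X = np.hstack([eye_features, np.ones((len(eye_features), 1))])  # add bias column
    coeffs, *_ = np.linalg.lstsq(X, screen_points, rcond=None)      # shape (3, 2)
    return coeffs

def estimate_gaze(coeffs, eye_feature):
    """Map one eye-feature sample to an estimated on-screen point."""
    return np.array([*eye_feature, 1.0]) @ coeffs

# Calibration: the user fixates a few known on-screen targets.
eye = np.array([[0.10, 0.05], [0.80, 0.06], [0.12, 0.70], [0.82, 0.72]])
screen = np.array([[100, 100], [1800, 100], [100, 1000], [1800, 1000]])
A = fit_gaze_mapping(eye, screen)
print(estimate_gaze(A, [0.45, 0.40]))  # rough estimate near the screen centre
```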

Looking Coordinated: Bidirectional Gaze Mechanisms for Collaborative Interaction with Virtual Characters, by Sean Andrist, Michael Gleicher, and Bilge Mutlu, received an honorable mention at CHI this year. The authors report on the design and evaluation of technologies for sharing social gaze cues in human-to-agent interactions in virtual reality (VR). Their work on bidirectional gaze mechanisms, the coordinated production and detection of social gaze cues, is based on a model synthesizing data collected in several human-to-human interactions.

These nonverbal cues can also help us coordinate with one another remotely. Researchers at Microsoft recently developed eye tracking technology to help remotely situated software engineers work together. In their paper, Improving Communication Between Pair Programmers Using Shared Gaze Awareness, Sarah D’Angelo and Andrew Begel designed a new visualization tool that lets a pair of software engineers see where in the code their partner is looking. In a study of the technology, remotely situated programmers working together to edit a program talked more efficiently about the changes they wanted, and reported that knowing where their partner was looking made communication easier.
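As a rough illustration of the shared-gaze-awareness idea (not the authors’ actual tool), a client could translate a raw gaze position into the source line currently under the user’s eyes and broadcast it to the partner’s editor, which then highlights that line. The editor geometry and message format below are hypothetical.

```python
# Illustrative sketch of sharing gaze awareness between pair programmers:
# convert a vertical gaze position into an editor line number, then
# serialize an update the partner's editor could highlight.
import json

def gaze_to_line(gaze_y_px, viewport_top_px, line_height_px, first_visible_line):
    """Map a vertical gaze position to the source line currently under it."""
    return first_visible_line + int((gaze_y_px - viewport_top_px) // line_height_px)

def gaze_message(author, path, line):
    """Serialize a gaze-awareness update to send to the partner."""
    return json.dumps({"author": author, "file": path, "line": line})

line = gaze_to_line(gaze_y_px=412, viewport_top_px=80, line_height_px=18, first_visible_line=1)
print(gaze_message("alice", "src/parser.py", line))
```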

Gaze tracking has also opened new possibilities for accessibility research. This year at CHI, Microsoft is presenting several projects related to accessibility, including a series of four papers (see below) on improving eye-gaze-operated augmentative and alternative communication (AAC) technologies for people with severe motor and speech disabilities. These papers explore issues such as how to adapt interfaces to the noise inherent in gaze input, how to dynamically adjust dwell-based eye-typing algorithms to reduce text-entry errors, and how to augment AAC devices with secondary “awareness displays” that add backchannel content, enhancing the richness of communication through low-bandwidth technologies.
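As a hedged sketch of the dynamic-dwell idea explored in the gaze-typing paper listed below (this is not the published algorithm), a dwell-based keyboard can shorten the dwell threshold for letters a language model predicts as likely and lengthen it for unlikely ones, so confident selections happen faster while accidental ones remain hard to trigger. The timing constants and probabilities are illustrative.

```python
# Toy sketch of probability-scaled dwell times for gaze typing.
MIN_DWELL_MS = 250   # floor for very likely keys
MAX_DWELL_MS = 900   # ceiling for unlikely keys

def dwell_threshold(next_letter_prob):
    """Scale the dwell time between MIN and MAX based on predicted likelihood."""
    span = MAX_DWELL_MS - MIN_DWELL_MS
    return MAX_DWELL_MS - next_letter_prob * span

def key_selected(gaze_samples_ms, key, letter_probs):
    """Fire the key once continuous gaze on it exceeds its dwell threshold."""
    threshold = dwell_threshold(letter_probs.get(key, 0.0))
    return sum(gaze_samples_ms) >= threshold

# After typing "th", a language model might rank "e" as very likely.
probs_after_th = {"e": 0.7, "a": 0.1, "z": 0.001}
print(key_selected([160, 160, 160], "e", probs_after_th))  # True: 480 ms beats the ~445 ms threshold
print(key_selected([160, 160, 160], "z", probs_after_th))  # False: unlikely keys need ~900 ms
```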

In one of these works, the authors explore how to use smartphone cameras for simple eye-gesture-based text input. To mitigate the drawbacks of current approaches, they created GazeSpeak, an eye-gesture communication system that runs on a smartphone and is designed to be low cost, robust, portable, and easy to learn, while offering high communication bandwidth. GazeSpeak interprets eye gestures in real time, decodes these gestures into predicted utterances, and facilitates communication, with different user interfaces for speakers and interpreters.
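The grouped-letter decoding idea can be sketched in a few lines (this toy example is not the actual GazeSpeak implementation): each eye gesture selects a group of letters, and a word list with frequencies disambiguates the gesture sequence into candidate utterances. The letter grouping, lexicon, and frequencies below are invented for illustration.

```python
# Toy sketch of grouped-letter decoding for eye-gesture typing: a gesture
# picks a letter group, and a frequency-ranked lexicon resolves ambiguity.
GROUPS = {
    "up":    set("abcdef"),
    "right": set("ghijklm"),
    "down":  set("nopqrs"),
    "left":  set("tuvwxyz"),
}

WORD_FREQ = {"hello": 120, "help": 90, "gem": 5}  # illustrative word frequencies

def matches(word, gestures):
    """A word matches if each letter falls in the group chosen by each gesture."""
    return len(word) == len(gestures) and all(
        ch in GROUPS[g] for ch, g in zip(word, gestures)
    )

def decode(gestures, lexicon=WORD_FREQ):
    """Rank candidate words for a gesture sequence by frequency."""
    candidates = [w for w in lexicon if matches(w, gestures)]
    return sorted(candidates, key=lexicon.get, reverse=True)

print(decode(["right", "up", "right", "down"]))  # ['help']
```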

Papers:

  1. “Toward Everyday Gaze Input: Accuracy and Precision of Eye Tracking and Implications for Design” by Anna Maria Feit, Shane Williams, Arturo Toledo, Ann Paradiso, Harish Kulkarni, Shaun Kane, Meredith Morris. Honorable Mention.
  2. “Improving Dwell-Based Gaze Typing with Dynamic, Cascading Dwell Times” by Martez E Mott, Shane Williams, Jacob O. Wobbrock, Meredith R. Morris.
  3. “Smartphone-Based Gaze Gesture Communication for People with Motor Disabilities” by Xiaoyi Zhang, Harish Kulkarni, Meredith Ringel Morris.
  4. “Exploring the Design Space of AAC Awareness Displays” by Kiley Sobel, Alexander Fiannaca, Jon Campbell, Harish Kulkarni, Ann Paradiso, Ed Cutrell, Meredith Morris. Honorable Mention.

These areas, along with last week’s post about “Interacting through touch,” are just a sample of what we’re up to in HCI at Microsoft. For those interested in learning more about the full range of HCI projects we’re working on, our group page is a great starting point.
