Don't turn a blind eye to robot bias

When a robot looks at you, what will it see? Will it see an ambitious, competent professional staring back? Or will it reduce you to a few traits such as your race or gender? Remember, it will likely use its initial impression of you to make a decision. Will it approve your loan? Reject your job application? Charge your neighbor less for car insurance? Will it see you, or something else?

These questions are increasingly relevant. My colleagues Jennifer Allyn, Jon Terry, Bhushan Sethi, and I recently hosted a roundtable discussion on diversity and inclusion (D&I). We spoke with executives from nine financial institutions, including professionals in data and analytics, diversity and inclusion, and human resources. Among other things, we discussed the effects that artificial intelligence (AI) could have on D&I, and its potential to make business better for everyone.

I emphasize “potential” because, if we’re not careful, it may not work out that way.

Unconscious bias is the idea that you may treat someone differently, or make assumptions about them, based on a trait they have, or that you think they have, without even knowing that you’ve done it. Biases can stem from an array of traits: how old a person is, what they look like, or whether or not they’re married or have children. And the list goes on. Inevitably, biases based on these traits appear in the workplace and, when not effectively controlled, can influence decisions. In building a diverse and inclusive firm, many organizations try to root out unconscious bias to create an equitable and supportive atmosphere that encourages diversity of thought. If unconscious biases (sometimes called “blind spots”) are allowed to run loose, those goals can be difficult to achieve.

So where does AI come in? AI, based on logic and algorithms, doesn’t have emotions or anecdotal experiences clouding its judgment. Its decision making comes from programmed computer code processing raw data. It won’t know a person’s gender or age unless those are relevant data points being fed into it, as with an actuarial computation. AI’s not going to make a snap judgment based on someone’s hairdo or clothing. It has real potential to help us address unconscious bias. That is, assuming that AI doesn’t have any built-in biases of its own.

How can AI develop bias? Well, AI is only as good as the work that’s put into it: the coding, the data sets, and the testing of algorithms. Organizations often have their own blind spots in building the development teams that oversee all of that work. If those teams aren’t diverse enough and the testing isn’t rigorous enough, blind spots can creep into the code. In fact, 76% of respondents in PwC’s 2017 CEO Pulse Survey told us that AI has the potential for bias.
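
To make that concrete, here’s a deliberately simplified sketch in Python. Everything in it is made up for illustration (the feature names, the synthetic data, the 0.8 penalty), but it shows the mechanism: a model that never even sees a protected attribute can still absorb historical bias through a correlated proxy feature in its training data.

```python
# Illustrative sketch (hypothetical, synthetic data): a model trained on biased
# historical decisions learns to reproduce that bias, even without seeing the
# protected attribute, because a proxy feature (here, "postcode") correlates with it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)              # 0 = majority, 1 = minority (hypothetical)
skill = rng.normal(0, 1, n)                # true qualification, same distribution for both groups
postcode = group + rng.normal(0, 0.5, n)   # proxy feature correlated with group

# Historical decisions: same skill, but the minority group was hired less often.
# This is the bias baked into the "training labels".
hired = (skill + rng.normal(0, 0.5, n) - 0.8 * group > 0).astype(int)

# The model never sees `group` -- only skill and the proxy feature.
X = np.column_stack([skill, postcode])
model = LogisticRegression().fit(X, hired)
pred = model.predict(X)

for g in (0, 1):
    rate = pred[group == g].mean()
    print(f"group {g}: predicted hire rate = {rate:.2%}")
# The predicted hire rates differ by group even though skill is identically
# distributed: the model has absorbed the historical bias via the proxy.
```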

Here’s one example. If you haven’t seen it, check out Joy Buolamwini’s 2016 TED Talk, in which she talks about her time as an undergraduate computer science major. When asked to develop a social robot capable of playing peek-a-boo, she used generic facial recognition software. After discovering that the software couldn’t recognize her face because of her dark skin tone, she was forced to make do with her (Caucasian) roommate’s face.

Buolamwini is now a researcher at MIT’s Media Lab, where she recently co-authored a study on gender and skin-type bias in commercial AI systems (also found on mit.edu). She and her colleagues found that the programs were at least 99% accurate in determining the gender of light-skinned men. But when presented with darker-skinned women, two popular programs were wrong about a third (34%) of the time. Algorithms that don’t get the basics right can have serious consequences. After all, what if they’re influencing loan decisions, insurance policy prices, hiring decisions, and more?
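
Catching that kind of gap starts with measuring accuracy for each subgroup separately, rather than only in aggregate. Here’s a minimal sketch of what that disaggregated evaluation might look like; the labels, predictions, and subgroup names are stand-ins for illustration, not the MIT study’s data.

```python
# Illustrative sketch (hypothetical data): compute a classifier's accuracy per
# demographic subgroup instead of a single headline number.
from collections import defaultdict

# (true_label, predicted_label, subgroup) -- e.g., gender classification results
results = [
    ("male", "male", "lighter-skinned men"),
    ("female", "male", "darker-skinned women"),
    ("female", "female", "darker-skinned women"),
    ("male", "male", "lighter-skinned men"),
    # ... in practice, many labeled examples per subgroup
]

totals, correct = defaultdict(int), defaultdict(int)
for true_label, predicted, subgroup in results:
    totals[subgroup] += 1
    correct[subgroup] += int(true_label == predicted)

for subgroup in totals:
    accuracy = correct[subgroup] / totals[subgroup]
    print(f"{subgroup}: accuracy {accuracy:.0%} (error {1 - accuracy:.0%})")
```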

So how can organizations combat algorithmic bias? Interestingly enough, it starts with more D&I. Our roundtable participants agreed that staffing development teams with talented people from diverse backgrounds is one of the most effective ways to help curb bias. But that’s not all. Organizations should also make sure they extensively test their algorithms so that any bias that slips past the development teams can be weeded out in the testing stage. Firms also need clear governance practices in place for robust monitoring and transparency. Based on conversations with our clients, at least one major US bank has already implemented governance and controls to detect potential bias that may creep into enterprise models.
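
As one hypothetical illustration of what such a monitoring control could look like, here’s a simple comparison of selection rates across groups, along the lines of the well-known “four-fifths” rule of thumb. The group names, decisions, and threshold below are assumptions for the sketch, not any particular firm’s policy.

```python
# Illustrative sketch (hypothetical outcomes): one simple governance check --
# compare selection rates across groups and flag large gaps for human review.
def selection_rate(decisions):
    """Fraction of positive (e.g., approved) decisions."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(decisions_by_group):
    """Ratio of the lowest group selection rate to the highest."""
    rates = {g: selection_rate(d) for g, d in decisions_by_group.items()}
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical model decisions (1 = approved, 0 = rejected) per group
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],
}

ratio, rates = disparate_impact_ratio(decisions)
print("selection rates:", {g: f"{r:.0%}" for g, r in rates.items()})
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # common rule-of-thumb threshold; actual policies vary
    print("flag for review: selection rates differ substantially across groups")
```

A check like this is only a starting point; the roundtable’s larger point stands that diverse teams, rigorous testing, and clear governance all have to work together.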

There’s no doubt that D&I efforts require real buy-in from an entire organization. These days, that means thinking about software and digital labor as well as training your staff to “live” your corporate values. Fortunately, leaders are getting it, with 87% of participants in PwC’s Global D&I Benchmarking Survey claiming D&I is a stated priority. We’re starting to see firms hold themselves accountable for their algorithms, or at least starting to identify and control the risks of AI. And they recognize that while algorithms have the potential to reduce unconscious bias and improve a firm’s D&I performance, it’s on the humans to make it happen.

Stefanie is a director at PwC and advises financial services clients on issues related to HR and workforce strategy. If you’d like to discuss these issues, please reach out.

Hector Aguilera, CEO, CCO, Business Unit Director, Commercial Director, New Business Director, Country Manager:

Stefanie Coleman, we at illumr can solve the AI bias problem right where it originates: the data. Using fair adversarial networks, our product Rosa can help organizations “cure” their datasets of bias against any protected characteristic, working alongside any type of algorithm.

Julia Lamm, Workforce Transformation Partner at PwC:

Stef - great article. I love the Algorithmic Justice League; it is daunting how quickly bias in robots can spread. Thanks for raising our awareness - I will definitely keep these in mind.
