
Why AI Leads Us to Think Less, Act Impulsively

'We need to be actively engaged in questioning what the algorithms do, what the results mean, and how inherent bias in the training set can affect the results,' says MIT Professor Bernhardt Trout. But it doesn't always work out that way.

December 13, 2019

Since MIT Professor Bernhardt Trout's engineering ethics course shifted to focus on the ethics of artificial intelligence, the class has ballooned from a handful of students per semester in 2009 to more than 150 this year.

As deep learning and neural networks take center stage, "the students have much more of a concern about AI...particularly over the last year or so," Trout says.

A key challenge, according to Trout, is that "these algorithms push us toward thinking less and acting based on impressions that may or may not be correct, as opposed to [making] our own decisions in a fully informed way. In general, we want to have the answer and move on. And these algorithms tend to play off on that psychology."

As AI evolves, "we need to be actively engaged in questioning what the algorithms do, what the results mean, and how inherent bias in the training set can affect the results," Trout says.

There are many ways this blind faith in algorithms can have adverse effects. For instance, when you start to believe (and "like") everything you see in your Facebook News Feed, which is powered by AI algorithms, you'll end up seeing only articles that confirm your viewpoints and biases, and you could become less tolerant of opposing views.

On other online platforms, content-recommendation algorithms can shape your preferences and nudge you in specific directions without your conscious knowledge. And in fields such as banking and criminal justice, blind trust in algorithms can be more damaging, leading to the unwarranted denial of a loan application or an unfair verdict against a defendant.

"We have to remember that these are all mathematical algorithms. And there's a good argument against thinking that everything in human life is reducible to mathematics," Trout warns.

Why Did AI Do That? Who Knows!

One of the major challenges of contemporary AI is lack of explainability. Deep-learning algorithms develop their logic from data and work in very complicated ways that are often opaque even to their creators. And this can cause serious trouble, especially where ethical issues are involved.

"It has become harder to trace decisions and analysis with methods like deep learning and neural nets," says Element AI's Marc-Etienne Ouimett. "The ability to know when a decision has been made or informed by an AI system, or to explain or interpret the logic behind that decision, becomes increasingly important in this context. You cannot effectively seek redress for harm caused by the misuse of an AI system unless you know that one has been used, or how it influenced the outcome."

This lack of transparency also makes it difficult to spot and fix ethical issues in algorithms. For instance, in one case, an AI algorithm designed to predict recidivism had silently used ZIP codes as a determining factor for the likelihood that a defendant would re-offend and wound up with a bias against black defendants, even though the programmers had removed racial information from their datasets.

In another case, a hiring algorithm penalized applicants whose resumes included the term "women," as in women's sports. More recently, Apple's new credit card was found to be biased against women, offering them credit limits up to 20 times lower than men's, reportedly because of the AI algorithms it uses.

In these cases, the developers had gone to great lengths to remove any characteristics from the data that could bias the algorithms. But AI often finds intricate correlations that indirectly encode attributes such as gender and race. And without a way to investigate those algorithms, finding these problematic correlations becomes a challenge.
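To make the proxy problem concrete, here is a minimal, hypothetical sketch (synthetic data and made-up feature names, not any real system mentioned above) of how a feature correlated with a protected attribute can reintroduce bias even after that attribute has been removed from the training data, and how a simple correlation check can flag it:

```python
# Hypothetical sketch with synthetic data: the protected attribute is dropped
# from the training features, but a correlated proxy ("zip_code") lets the
# model learn group membership anyway. Not based on any real dataset or system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Protected attribute -- never shown to the model.
protected = rng.integers(0, 2, size=n)

# Proxy feature that agrees with the protected attribute 90% of the time,
# standing in for something like a neighborhood or ZIP-code indicator.
zip_code = np.where(rng.random(n) < 0.9, protected, 1 - protected)

# A legitimately relevant feature, independent of the protected attribute.
income = rng.normal(50, 10, size=n)

# Historical labels that were themselves biased: group 1 needed a higher
# income to get a positive outcome.
label = (((income > 48) & (protected == 0)) |
         ((income > 58) & (protected == 1))).astype(int)

# Train WITHOUT the protected attribute -- only the proxy and income.
X = np.column_stack([zip_code, income])
model = LogisticRegression().fit(X, label)
pred = model.predict(X)

# The predictions still differ sharply by protected group, because the proxy
# carries that information for the model.
for g in (0, 1):
    print(f"group {g}: positive prediction rate = {pred[protected == g].mean():.2f}")

# A simple audit step: measure how strongly each input correlates with the
# protected attribute before trusting the model.
for name, col in [("zip_code", zip_code), ("income", income)]:
    print(f"{name} vs. protected attribute: r = {np.corrcoef(col, protected)[0, 1]:.2f}")
```

A real audit works on the same idea at larger scale: compare model outcomes across groups and check how strongly each input correlates with protected attributes, rather than assuming that dropping a column removes its influence.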

Thankfully, efforts to create explainable AI models are taking place, including an ambitious project by DARPA, the research arm of the Department of Defense.

Silicon Valley: Concerned, But Loving That ROI

Another factor in the increased interest in the ethics of AI is the active engagement of the commercial sector.

"While the growth of deep learning and neural networks is a part of the growing attention toward ethical AI, another major contributor is...leaders in tech raising the issue and trying to actively make their points of view known to the broader public," Professor Trout says.

Execs like Bill Gates and Elon Musk, as well as scientists such as Stuart Russell and the late Stephen Hawking, have issued warnings about the potentially scary unintended consequences of AI. And tech giants like Microsoft and Google have been forced to explain their approach to AI and develop ethical guidelines, particularly as it relates to selling their technology to government agencies.

"Ethical principles are a good start, but operationalizing these across the company is what counts. Each team, from fundamental/applied research to product design, development, and deployment, must understand how these principles apply to their functions," Element AI's Ouimett says.

Ouimette also underlines the need for companies to work actively with lawmakers. "It's important for businesses that have the technical expertise to engage in good faith with regulators to help them understand the nature of the risks posed by the technology," he says.

Element AI recently partnered with The Rockefeller and Mozilla Foundations to produce a list of recommendations for governments and companies on the role of the human-rights framework in AI governance.

"The collaboration will focus on advancing research on the legal, governance, and technical components of data trusts, which both Element AI and the Mozilla Foundation believe have tremendous potential—as safe and ethical data-sharing mechanisms, as many governments have thus far conceived of them, but also as tools that could be used to empower the public to participate in decisions regarding the use of their personal data, and to collectively seek redress in cases of harm," Ouimett says.

But Professor Trout has a slightly different view on the involvement of tech companies in AI ethics. "At the end of the day, they're doing this to a large extent for commercial reasons. They want to make their employees happy. That was the reason Google decided not to work with the Department of Defense. And they want to make their customers and the government happy—and they want to enhance their bottom line," he says.

"I have not seen these companies really promote a thoughtful, deep approach to ethics, and that's where I would find them fall short. They have resources, they would be able to, but I don't see that happening. And I think that's a pity."


About Ben Dickson

Ben Dickson is a software engineer and tech blogger. He writes about disruptive tech trends including artificial intelligence, virtual and augmented reality, blockchain, Internet of Things, and cybersecurity. Ben also runs the blog TechTalks. Follow him on Twitter and Facebook.
