The rapid development of artificial intelligence (AI) holds great promise but also poses potential pitfalls. AI can change the way we live, work, and play; accelerate drug discovery; and drive edge computing and autonomous systems. It also has the potential to transform global politics, economies, and cultures so profoundly that the U.S. and other countries may be entering what some speculate will be the next Space Race.

We are just beginning to understand the implications of unchecked AI. Recent headlines have highlighted its limitations and the continued need for human control. Nor will we be able to ignore the range of ethical risks posed by issues of privacy, transparency, safety, control, and bias.

Given the advances already made in AI, and those yet to come, the technology is undoubtedly on a trajectory toward integration into every aspect of our lives. As we prepare to turn an increasing share of tasks and decision-making over to AI, we must think more critically about how ethics factors into AI design to minimize risk. With this in mind, policymakers must proactively consider ways to incorporate ethics into AI practices and design incentives that promote innovation while ensuring AI operates with our best interests in mind.

In an AI World, the Garbage In/Garbage Out Principle Is Amplified

With so much at stake, it's tempting to ponder whether ethical issues can be solved by simple algorithm changes. Why can't AI technologists team up with philosophers or ethicists and immediately change the calculus, infusing morality into the very algorithms on which AI is based? It's not that simple. What is fair and equitable, and in what context, has no clear-cut answer. And beyond the murky definitions and lack of common consensus, the issue lies not only in the algorithms themselves but in the data used to train them.

Perhaps most consequential is the intentional and unintentional bias embedded in the datasets with which AI is trained. Bias in AI can take many forms: dataset bias, association bias, interaction bias, automation bias, and confirmation bias (Chou et al., 2017). It can result from simple mistakes or oversights in data aggregation. Computers then use this potentially flawed data to make assumptions and calculations that are not truly objective. This alone may not seem overtly dangerous, but the real issue lies in the potential for these biases to be scaled to such a degree that they affect how computers treat much larger sets of data. Biased training data leads machine learning systems to rely on unjustified bias and discriminate against groups at scale (Crawford, 2017).
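
To make the scaling effect concrete, consider the following minimal sketch. It uses entirely synthetic, hypothetical data and the scikit-learn library (not any system or study cited above) to show how a model trained on historically biased hiring labels reapplies that bias automatically to every new applicant it scores:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two synthetic groups with identical skill distributions.
group = rng.integers(0, 2, n)   # 0 = group A, 1 = group B
skill = rng.normal(0, 1, n)     # qualification, independent of group

# Historical hiring decisions encode a penalty against group B.
hired = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0

# A model trained on those labels absorbs the bias along with the signal.
model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Score a fresh, equally qualified applicant pool.
new_group = rng.integers(0, 2, n)
new_skill = rng.normal(0, 1, n)
pred = model.predict(np.column_stack([new_skill, new_group]))

for g, name in [(0, "A"), (1, "B")]:
    print(f"group {name} selection rate: {pred[new_group == g].mean():.1%}")
```

Even though skill is distributed identically across both groups, the model learns the historical penalty against group B and applies it to the entire applicant pool, selecting group B far less often. Nothing in the algorithm itself is malicious; the flaw arrives with the data.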

AI technologies already on the market display these intentional and unintentional biases. For example, consider talent search technology that intentionally groups candidate resumes by demographic characteristics, or insensitive autofill search algorithms (Lapowsky, 2018). By contrast, the Beauty.AI pageant showed unintentional automation bias in action: by reinforcing European notions of beauty, its machines overwhelmingly chose light-skinned winners over the large number of dark-skinned applicants (Levin, 2016).

As a nation, we need to understand that the implications of AI reach far beyond beauty contests or advertising targets. AI has the ability to create almost undetectable forgeries of news articles, audio recordings, and videos, making it difficult to separate truth from fiction. Organizations and governments will be held accountable for their actions, or lack thereof, in protecting citizens’ rights to privacy, equity, and justice.

Government’s Role in Safe and Responsible AI Implementation

While AI cannot be expected to solve philosophical conundrums that have existed across society for years, government leaders and policymakers are uniquely positioned to lead and fund long-term R&D on topics like ethics and bias where businesses may not be incentivized to invest. They must also introduce smart policy that balances the need for innovation with the obligation to benefit and safeguard society. Such mitigating measures should be built with inclusion and fairness in mind, and include these two fundamental steps:

  1. Establish Standards and Guidance to Navigate Risks. The government can help confront AI risks by leading the joint development and publication of standards and guidance around safety, privacy, ethics, and control in coordination with industry, academic, and international stakeholders. Equally important is continued funding of research focused on testing and certification methods, including programs like DARPA’s Explainable AI, which aims to enable humans to understand how AI algorithms behave. The government should also focus on establishing clear plans and expectations for organizations on how to deal with failures when they do arise—because they most certainly will.  

  2. Devote Public Resources to the Creation and Curation of Accurate and Bias-Free Data Sets. One line of defense against AI systems that inflict unfair treatment is to scrutinize how data sets are constructed before they are operationalized; attention to bias cannot be an afterthought (a simple illustration follows this list). Today, many of the most powerful data sources come from two places: (1) academic institutions, and (2) corporations. Both types of data sets have limitations. Academia's data sets are often constrained by the time and resources of researchers using the data to drive publications, while corporate data typically represents individual customer bases rather than the public as a whole. The government should release guidance on data-sharing between public and private organizations, enabling the curation of more diverse data sets while preserving data integrity and adhering to the aforementioned standards and guidance.
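
As a simple illustration of the kind of pre-operational review described in step 2, the sketch below uses the pandas library to compare a data set's demographic makeup against a population benchmark before the data is used for training. The helper function, column name, groups, and benchmark shares are all hypothetical placeholders, not any official standard:

```python
import pandas as pd

def representation_report(df: pd.DataFrame, column: str,
                          benchmark: dict) -> pd.DataFrame:
    """Compare a data set's group shares against a population benchmark."""
    observed = df[column].value_counts(normalize=True)
    report = pd.DataFrame({
        "observed_share": observed,
        "benchmark_share": pd.Series(benchmark),
    })
    # Skew well below 1.0 flags underrepresented groups.
    report["skew"] = report["observed_share"] / report["benchmark_share"]
    return report.sort_values("skew")

# Toy data; the benchmark shares here are placeholders, not census figures.
df = pd.DataFrame({"demographic": ["A"] * 700 + ["B"] * 250 + ["C"] * 50})
print(representation_report(df, "demographic",
                            {"A": 0.60, "B": 0.30, "C": 0.10}))
```

Groups with low skew would then be candidates for targeted collection or reweighting before the data set is curated into a shared resource, rather than being discovered as a source of bias after a system is already deployed.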

By leading the dialogue now and taking proactive steps to address these topics, the government can ensure that AI develops with a net positive impact on society, rather than hindering potential opportunities simply because ethics wasn't considered at the outset.

What are other challenges and solutions for AI in government and beyond? We want to hear from you. Visit www.boozallen.com/ai to learn more.