AI desperately needs regulation and public accountability, experts say

Artificial intelligence systems and their creators are in dire need of direct intervention by governments and human rights watchdogs, according to a new report from researchers at Google, Microsoft and others at AI Now. Surprisingly, it looks like the tech industry just isn’t that good at regulating itself.

In the 40-page report (PDF) published this week, the New York University-based organization (whose members include researchers affiliated with Microsoft Research and Google) shows that AI-based tools have been deployed with little regard for potential ill effects, or even documentation of good ones. That would be one thing if it were happening in controlled trials here and there; instead, these untested, undocumented AI systems are being put to work in places where they can deeply affect thousands or millions of people.

I won’t go into the examples here, but think border patrol, entire school districts and police departments, and so on. These systems are causing real harm, and not only are there no systems in place to stop them, but few to even track and quantify that harm.

“The frameworks presently governing AI are not capable of ensuring accountability,” the researchers write in the paper. “As the pervasiveness, complexity, and scale of these systems grow, the lack of meaningful accountability and oversight – including basic safeguards of responsibility, liability, and due process – is an increasingly urgent concern.”

Right now companies are creating AI-based solutions to everything from grading students to assessing immigrants for criminality. And the companies creating these programs are bound by little more than a few ethical statements they decided on themselves.

Google, for instance, recently made a big deal about setting some “AI principles” after the uproar about its work for the Defense Department. It said its AI tools would be socially beneficial and accountable, and wouldn’t contravene widely accepted principles of human rights.

Naturally, it turned out the company had been working the whole time on a prototype censored search engine for China. Great job!

So now we know exactly how far that company can be trusted to set its own boundaries. We may as well assume that’s the case for the likes of Facebook, which is using AI-based tools to moderate content; Amazon, which is openly pursuing AI for surveillance purposes; and Microsoft, which yesterday published a good piece on AI ethics — but as good as its intentions seem to be, a “code of ethics” is nothing but promises a company is free to break at any time.

The AI Now report has a number of recommendations, which I’ve summarized below but which are really worth reading in their entirety. The report itself is quite readable, and offers both a good review and smart analysis.

  • Regulation is desperately needed. But a “national AI safety body” or something like that is impractical. Instead, AI experts within industries like health or transportation should be looking at modernizing domain-specific rules to include provisions limiting and defining the role of machine learning tools. We don’t need a Department of AI, but the FAA should be ready to assess the legality of, say, a machine learning-assisted air traffic control system.
  • Facial recognition, in particular questionable applications of it like emotion and criminality detection, needs to be closely examined and subjected to the kinds of restrictions applied to false advertising and fraudulent medicine.
  • Public accountability and documentation need to be the rule, covering a system’s internal operations from data sets to decision-making processes. These are necessary not just for basic auditing and for justifying the use of a given system, but for legal purposes should a decision be challenged by someone that system has classified or affected. Companies need to swallow their pride and document these things even if they’d rather keep them as trade secrets — which seems to me the biggest ask in the report.
  • More funding and more legal precedents need to be established for AI accountability; it’s not enough for the ACLU to write a post about a municipal “automated decision-making system” that deprives certain classes of people of their rights. These things need to be taken to court, and the people affected need mechanisms for feedback.
  • The entire industry of AI needs to escape its engineering and computer science cradle — the new tools and capabilities cut across boundaries and disciplines, and should be considered in research by more than just the technical side. “Expanding the disciplinary orientation of AI research will ensure deeper attention to social contexts, and more focus on potential hazards when these systems are applied to human populations,” write the researchers.

They’re good recommendations, but not the kind that can be made on short notice, so expect 2019 to be another morass of missteps and misrepresentations. And as usual, never trust what a company says, only what it does — and even then, don’t trust it to say what it does.