The Ethical Impact of AI on Society

Last week I shared my experiences of the AISB hackathon. Another part of the conference that was open to members of the public was a panel discussion on The Ethical Impact of AI on Society. A quick look at the event page made the wealth of knowledge and experience on the panel immediately obvious:

  • Alan Winfield - Professor of Robot Ethics at the University of the West of England
  • Mandy Chessel - IBM Distinguished Engineer, Master Inventor and Fellow of the Academy of Engineering
  • Nello Christianini - Professor of AI at the University of Bristol
  • Danit Gal - Chair of the IEEE Global Initiative for Ethical Considerations in AI and AS Outreach Committee, and external consultant to Tencent on AI Ethics
  • Björn W. Schuller - Professor of Intelligent Systems at the University of Passau

Alan Winfield hosted the panel and presented the questions. At the end there was an opportunity for the audience to ask their own questions.

I started off writing down the more noteworthy points for consideration later. I found myself writing notes for most of the 90-minute panel. There truly was a fountain of knowledge there and I’ll do my best to recount the questions asked and the themes of the answers. I do apologise that I’m not always able to attribute the original speaker. You will also notice that the responses naturally drift from the initial question posed as the discussion flows.

And so we start off with the first question…

When did you first come across AI ethics and what have you learnt since?

Björn shared his research, which involved helping autistic children recognise and validate their emotions: the system detected the children’s emotions and reflected them back to aid emotional intelligence. He realised how ethically important it was to be accurate. If the system got it wrong, what happens? The answer isn’t clear.

The panel agreed with Björn when he spoke about how we’re not very good at deception detection at the moment, so governments and employers should calm down for a little while; it can’t yet be used effectively for interviewing.

One comment briefly touched on whether we can, or should, hurt machines, but the topic wasn’t dealt with in depth at this point. The rights of a robot were considered in more detail later.

We then moved on to Mandy. Her blended background of academia and business meant that she could give quite a different perspective, and I was more readily able to identify with her responses.

Mandy raised the question “at which point does the engineer say no?”. This is an interesting question that I face even as the owner of a small service business, let alone once you consider businesses with far more power.

Although she gave no examples, it was clear that Mandy had experience with businesses that had legal access to data, where the act of collating, processing and acting upon it raised ethical questions, particularly over the insights that could be gained from it.

It was an eye-opening process for Mandy, who found that while we grow up thinking about right and wrong, the reality of what people perceive as ethical is blurrier than you might expect. What’s more, it can be difficult to resolve the differences in these stances when there’s no standard. She and her team at IBM created a set of questions that encouraged businesses not to overuse data in ways that would encroach on the privacy of customers.

I’ve worked in ecommerce for coming up to 10 years now. When it comes to tracking customers in order to provide targeted advertising, at what point does it move from being a thoughtful, useful approach to communication to an abuse of our power to monitor and extract ever more information about their lives?

When considering the ethics of AI, Danit served up the example of Chinese poker bots being taught to bluff on a random basis. While this is an AI constrained to a small use case, the idea that even the creator of the programme cannot know whether the bot is bluffing certainly raises the eyebrows of those who are considering the trajectory of such systems.

The last point on this question was made by Nello, and the other panel members concurred with it. As developers and engineers we have to consider that everything we make will drift to other use cases. I think this is interesting because it’s easy to bring up the example of researchers working on technology that can be repurposed by the military, but there’s a whole range of technology that isn’t about life or death yet can still be re-used in ethically questionable circumstances.

Do the public understand how machine learning is affecting our lives?

The question was first posed to Mandy, who expanded on the breadth of the question by saying that “they and we don’t understand”. As with most technology, people can see its utility, but that doesn’t mean they recognise how it works. Judging the long-term effects with any certainty is nigh-on impossible.

Danit warned that we have reached a point of no return: the convenience is such that we won’t be able to give it up. The primary concern here is the level of privacy we are giving up for the benefits that the largest internet companies provide us, in many cases, for free.

Nello shared that one of the challenges with this is that it’s so difficult for us to put into words the importance of retaining our privacy. For many of us, it simply feels wrong and makes us nervous. Storytelling is such a powerful way of conveying messages that he encouraged us to create better stories so that we can share this ingrained feeling.

We need to energise apathetic people to give up the fallacy of having nothing to hide. The best that I have read on this subject is a Wired article. Along with a retort to those who take this approach, it also made me appreciate the need for a non-perfect society. The ability to “get away” with breaking the law is actually an important part of society’s evolutionary process. The ability for people to experiment and form new opinions on previously outlawed acts can inspire a majority to agree upon a repeal or modification.

People have benefitted from technology and most are happy to trade some privacy for that. They do not realise that giving up personal data isn’t a technological requirement but a component of the business model. So how do we encourage consumers to choose the business models that cost more but retain their privacy? The parallel that was made was with organic food and consumer choice. Likewise, it’s going to be difficult to change market perceptions and to value the different approaches. “Ethical AI” could be a label used to market technology products and to promote social pressure to value it.

We know that people who don’t care don’t vote. That is why improving our messaging to educate others is so important. Without engaging them, we won’t reach the activism needed to redirect the flow.

Do users have a right to explanation?

Deep learning algorithms provide us with the greatest levels of accuracy, but at the same time they are opaque about how they come to their decisions. This makes many feel uncomfortable. As previously discussed, most people don’t understand how technology works, but there is a trust that someone does. A member of the audience asked whether it was unethical for us to use neural networks and whether we should take an oath to focus our research efforts on more transparent algorithms.

One panel member to share their perspective on this was Nello. He explained that many algorithms use statistical models. These are complex mathematical models that aren’t easy to explain - so even though we technically could explain them, we can’t practically do so.

Nello said that we expect everything to have a single, explainable cause, but this isn’t how the world works. He shared the examples of those who get turned down for jobs, insurance, credit and so on. As humans we wish to understand the why so that we know how to affect it, i.e. how to be successful upon mortgage re-application. When deep learning, or even just statistical modelling, is used to make a judgement, giving a story-based reason for the rejection becomes unachievable.

There is ongoing research into building neural networks that aren’t black boxes, as well as an effort to train separate models to generate the story. What is interesting here is that it’s not so important for the story to be 100% precise, listing out every factor.
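
To make the idea of a separate story-generating model concrete, here is a minimal sketch of one common approach, a post-hoc surrogate: a shallow decision tree trained to mimic an opaque model’s predictions so that its rules can be read as a rough explanation. This is my own illustration rather than anything shown at the panel; it assumes scikit-learn is available and the feature names are hypothetical.

```python
# A minimal sketch of a post-hoc surrogate explanation:
# train an opaque model, then fit a shallow decision tree to mimic
# its predictions and read that tree as the "story".
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy stand-in for, say, a loan-approval dataset.
X, y = make_classification(n_samples=2000, n_features=4, random_state=0)
feature_names = ["income", "debt", "age", "credit_history"]  # hypothetical labels

black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The surrogate learns the black box's *predictions*, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# A short, imprecise but readable account of the black box's behaviour.
print(export_text(surrogate, feature_names=feature_names))
```

The surrogate is only an approximation of the model it describes, which is exactly the point: the story doesn’t need to list every factor precisely to be useful.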

The fact that the story needn’t be precise reminded me of the psychology research by Ellen Langer, which noted that people were much more compliant when someone wanted to push in line to use a photocopier if they provided a reason - even if that reason was “because I have to make copies”.

So on one level, we know that people neither need nor want a 100% precise list of reasons and each one’s contributing weight to a rejection. On another level, though, is it ethical for us to use this as an excuse not to trade more efficient models for transparent ones?

Nello continued by comparing the importance of transparency versus fairness. When we look at the pharmaceutical market, we care that drugs are fair, i.e. that they have the effect they claim. That’s not to say that we know why they work; we may only have a hypothesis. Either way, what matters is that they work as advertised.

To what extent are AI ethics dependent on cultural context?

Danit’s exposure to different cultures, particularly Chinese and Japanese, positioned her best to lead the discussion on this question. Firstly, she asked us to consider the paradox of building AI in our own image. We expect AI systems to explain their decisions, but can we expect the same from a human being? While I don’t believe this warrants foregoing the discussion, it made me think about how people make decisions subconsciously and only construct the story afterwards. Even if we believe the story we create, it’s not so different from one AI model being used to make the prediction and another being used to create the story.

Furthermore, Danit expressed the difficulty with coming up with an answer to many of these questions because the social implications are culturally specific.

Researchers and policy writers in Japan are considering robots as “members or quasi-members” of society and the rights that they may have within it. This is in stark contrast to Chinese and Western philosophies, which consider robots to be tools, regardless of their “intellect” and form.

This has legal implications when discussing fault. If you consider robots to be tools, then the creator, maintainer, and/or owner can be liable for its actions. Conversely, if you treat a robot as a quasi-member of society then it can be argued that the fault lies with the robot alone. That’s a dangerous direction to go in.

During the audience questions phase, Virginia Dignum was adamant that AI was an artefact and that we as the creators are always responsible.

This led us on to a short discussion on the ethics of humanoid robots. While the context was not provided, there were notable members of the room who are strongly against the idea of humanoid robots because they fool humans into giving robots more rights than they deserve. If you’re interested, read more from Joanna Bryson on why Robots Should be Slaves.

Reflecting on all these items, Danit said that technology is progressing too quickly for us to answer these ethical questions. The progress is too quick for policy makers to get up to speed and agree guidelines.

How is society affected right now?

Despite the avenues that could be explored here, little time was spent on this subject. Clearly privacy is the biggest ethical discussion right now. It is highly likely that national elections have been affected by algorithms; we just don’t know to what extent.

We know that the filter bubble of social networks is a problem and needs to be tackled if we wish to promote effective discussion between different viewpoints.

Nello does not believe that we’re at risk of death or extinction through AI. The panel maintained the belief that AI is a tool, but as Nello had mentioned earlier, we have to take responsibility for what we create and what else it could be used for. It’s irresponsible to hide behind the excuse that we conducted our research on the understanding that it would only be used for “good”.

What recommendations do you have for policy makers?

Nello started us off by explaining that electricity was the key to the industrial revolution and data is the key to the one being brought about by AI.

The importance of this means that we need to create better educational material. Mandy raised the issue that the portrayal of AI in movies is the basis for most people’s understanding.

However, recognising that education is the problem doesn’t lead us quickly to a solution. “You can only tell a story when you know what you want to tell, and we don’t.” Nello is a researcher in the field and yet even he is surprised by the progress of the last 5-10 years.

There is going to be a need for balance between regulation and innovation. While that’s easy to say, it’s much harder to determine where the needle lies on that scale between them.

From Danit’s experience of witnessing policy makers from a range of countries and cultures discuss AI ethics, there is a frustrating and dangerous preference for innovation over good ethics. She called for them to be willing to stifle innovation in order to protect people. This is an area of conflict during discussions, which can make it hard to make progress.

On the topic of privacy, it was argued that it is sometimes the responsibility of policy makers to protect the people because they have no one else who will do so. What’s more, people aren’t always good at making decisions. There is precedent for this in law. For example, the Discrimination Act in the UK recognises biases and protects people from themselves (and others). The role, and ultimately the amount of control, that a government exerts is a sensitive subject, particularly in the US.

As mentioned earlier, it’s not that the technology requires us to give up our privacy. For example, we can perform the computation on the device to protect consumer data, but that has its trade-offs. The public first has to know that’s happening and not only understand the difference, but care about it.

An audience question followed that suggested regulation could be used to guide innovation so that we invest in research to create better algorithms. Without the need for it, and without investment in research in that area, it’s unlikely that progress will be made.

Once we consider the approach of different cultures to the regulation-versus-innovation debate, our preference can lean towards innovation out of fear that others will de-prioritise ethics to gain an advantage - the prisoner’s dilemma. Nello said that there is always going to be a country that is suffering economically and will trade ethics in the short term to fan the flames of its economy.

Audience Questions

Where applicable, I’ve included audience questions within the previous discussions. Here are a couple of discussions that stood alone.

Do we need to design failsafes into these systems? This is something that Danit is working on at the IEEE. Much of the current research is about how to incorporate machines further into our lives without thinking about the second part: being able to stop using the machine. For example, if an autonomous car gets hacked (and nothing is secure), there needs to be a mechanical way to regain control. The process to regain control cannot be computer-based, because that too can be hacked.

It wouldn’t be a discussion about AI ethics without one on bias in AI. There was a question about whether AI should have a gender; indeed, some languages don’t. Some recent research by Joanna Bryson shows that regardless of whether a language has gendered articles, our language itself evolves to incorporate the (sexist and racist) biases in our culture. So it’s less a question of whether it should have a gender and more one of how we counter these ingrained biases.
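
As a toy illustration of what that kind of measurement looks like (my own sketch, not from the panel, using made-up three-dimensional vectors in place of real pretrained embeddings such as GloVe), an association test simply compares how close a target word sits to male versus female attribute words in the embedding space:

```python
# Toy sketch of a word-embedding association test: compare how strongly
# a target word associates with a "male" versus a "female" attribute word.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 3-d vectors; real studies use embeddings trained on large corpora.
emb = {
    "he":       np.array([0.9, 0.1, 0.0]),
    "she":      np.array([0.1, 0.9, 0.0]),
    "engineer": np.array([0.8, 0.2, 0.1]),
    "nurse":    np.array([0.2, 0.8, 0.1]),
}

def gender_association(word):
    # Positive => closer to "he"; negative => closer to "she".
    return cosine(emb[word], emb["he"]) - cosine(emb[word], emb["she"])

for word in ("engineer", "nurse"):
    print(word, round(gender_association(word), 3))
```

With real embeddings the same comparison surfaces the cultural associations the research describes, which is what makes countering them so hard.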

A wealth of things to ponder! Thanks to the AISB for a great event.
