
Women Stand Against Social Injustice In AI


The need for greater gender and ethnic diversity in technology has grown from a whisper a decade ago to the roar of a World Cup football goal. We can no longer ignore the injustice of a male-dominated algorithmic trade, a despicable parade of inequity and inequality. The naysayers who cry discrimination against white males need to look at the facts of what Joy Buolamwini calls the coded gaze and the rise in algorithmic bias. True, greater gender and ethnic diversity won’t solve all the problems of unfairness, but it will curb its greatest excesses. Potential imbalances are less likely to go unnoticed.

In another Forbes article, I interviewed a number of prominent women who have spoken out on the question of gender diversity and why it is so important in the tech community. Companies with more women in senior roles produce greater shareholder returns, and an inclusive environment means a greater chance for AI to do good.

Their responses explain how the retention of women in the workplace can be impeded by a dominant white male identity that reproduces itself and its privilege, by harassment, bullying and setting higher standards for women in coding. Problems of recruitment are also emphasized. Hiring often happens by word of mouth or through homogenous networks that reinforce the status quo. And, ironically, algorithms developed to make shortlisting and hiring fairer are so often biased because of a lack of diversity in the teams developing them. It is a self-perpetuating system.

Women Leading in AI


The newest initiative worthy of note is Women Leading in AI (WLinAI), a network started in May 2018. Its explicit goal is to provide female role models and champions who encourage women in tech to grow both professionally and personally. Its members are women from all walks of life, including leading AI scientists, algorithm coders, privacy experts, politicians, charity sector leaders and academics.

They have stepped up to the mark, launching a new report at the U.K. House of Lords: 10 Principles of Responsible AI. These are couched as recommendations for government to regulate Artificial Intelligence and drive its development. They hope to mobilise politics to build an AI that supports our human goals and is constrained by our human values. And it seems to be working already.

The House of Lords committee room 17 was packed, with some standing in the corridor watching through the door. It was moving to see so many women in the room and such great ethnic diversity. I even saw two girls in their school uniforms with expressions of intense interest. This represents a moment of momentum for the movement against inequality in technology.

“For the enormous benefits of technology to be enjoyed by society as a whole, we need to stop churning out algorithms which discriminate against women and minorities,” said Women Leading in AI co-founder Ivana Bartoletti.

“We want to mobilise politics to get a grip and set the rules around Artificial Intelligence so that it does not discriminate against women and ethnic minorities and is led by our shared human values, not a puppet for men's assumptions”.

Lords Select Committee on AI takes a lead on transparency and prejudice

Last year the House of Lords Select Committee on Artificial Intelligence, chaired by Lord Clement-Jones, considered the role of AI in a social and economic context, and proposed a set of ethical guidelines. This was long overdue, and their report, AI in the UK: ready, willing and able?, provided a much needed counterbalance to the prominent discussions of how much AI will boost the UK economy. While important, boosting the economy must not take precedence over the rights and lives of all U.K. citizens.

Among the Lords' 74 paragraphs of recommendations to the government were considerations of the prejudice created by the technology and of the need to recruit from diverse gender, ethnic and socio-economic backgrounds.

Developers set the parameters for machine learning algorithms, and the choices they make will intrinsically reflect the developers’ beliefs, assumptions and prejudices. The main ways to address these kinds of biases are to ensure that developers are drawn from diverse gender, ethnic and socio-economic backgrounds, and are aware of, and adhere to, ethical codes of conduct. (Paragraph 120)

We recommend that a specific challenge be established within the Industrial Strategy Challenge Fund to stimulate the creation of authoritative tools and systems for auditing and testing training datasets to ensure they are representative of diverse populations, and to ensure that when used to train AI systems they are unlikely to lead to prejudicial decisions. This challenge should be established immediately.

The government acknowledged the problem in their response: "We will work to ensure that those developing and deploying AI systems are aware of these risks, and the trade-offs and options for mitigation are understood." But the response brushes off setting this up as a specific challenge in their Industrial Strategy. Instead they say, "we will work with the Alan Turing Institute, which has been working to address these issues." There are no doubts about the credentials of the Alan Turing Institute, but a massive problem of growing injustice requires a much larger injection of cash and wider participation to solve the biased-algorithm problem.
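To make concrete what such auditing tools might do, here is a minimal sketch in Python of a representativeness check of the kind the Lords' challenge envisages. Everything in it (the function name, the tolerance, the toy data) is an illustrative assumption of mine, not a description of any existing tool:

```python
# A minimal, illustrative dataset audit: compare group proportions in a
# training set against reference population shares and flag groups that
# are under-represented. All names and thresholds here are assumptions.
from collections import Counter

def audit_representation(records, group_key, population_shares, tolerance=0.05):
    """Return groups whose share of the dataset falls short of the
    reference population share by more than `tolerance`."""
    counts = Counter(record[group_key] for record in records)
    total = sum(counts.values())
    flagged = {}
    for group, expected in population_shares.items():
        observed = counts.get(group, 0) / total
        if observed < expected - tolerance:
            flagged[group] = {"observed": observed, "expected": expected}
    return flagged

# Made-up example: a face dataset that is 80% male.
records = [{"gender": "male"}] * 80 + [{"gender": "female"}] * 20
print(audit_representation(records, "gender",
                           {"male": 0.49, "female": 0.51}))
# {'female': {'observed': 0.2, 'expected': 0.51}}: women are 20% of the
# data against a 51% population share, so the audit flags the skew.
```

A real audit framework would of course go much further, checking intersectional groups and downstream error rates, but even a crude check like this would catch the kind of skew Buolamwini documented in face datasets.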

In response to the Lords Select Committee's request for greater transparency in AI, the U.K. Government responded,

Government believes that transparency of algorithms is important, but for development of AI an overemphasis on transparency may be both a deterrent and in some cases, such as deep learning, prohibitively difficult. Such considerations need to be balanced against positive impacts use of AI brings.

Obviously, we can't make all of AI transparent, and it is not always required. But AI that is used to make automated decisions that significantly impact people's lives must be transparent, whether or not that requirement acts as a 'deterrent' or is prohibitively difficult. Difficulty is no excuse for unexplainable injustice or prejudice.

At the launch of the Women Leading in AI report


It is most appropriate that the chair of the Lords Select Committee on AI, Lord Clement-Jones, chaired the launch of the Women Leading in AI report. He could not have been more supportive of the report and said, in a most authoritative tone, that it should inform government.

He told me,

As our House of Lords Select Committee Report concluded, retention of public trust in AI is crucial if it is to develop in constructive and beneficial ways. Nowhere is this more at risk than when algorithms exhibit bias, whether as regards gender or race. We concluded that greater diversity in the tech industry and workforce and the development of audit tools is a partial answer, but WLinAI in their new report have taken this a very important step further by proposing a set of ten actions for the Government to comprehensively tackle algorithm bias. I look forward to the debate and a serious and timely response from Government to this vital initiative.

The U.K. is also progressing the ethical use of data with a new Centre for Data Ethics and Innovation (CDEI), which is only just beginning its work. Algorithmic bias is certainly in its sights.


The Director of the CDEI, appropriately, also spoke at the launch and pointed to the importance of the document in pushing the debate forward,

It is great to see WLinAI addressing the important ethical issues in AI. The report is a valuable contribution to the debate and will help inform the work of the Centre for Data Ethics and Innovation as we begin our work looking into algorithmic fairness and bias in AI.

It is encouraging to see that the CDEI will focus on algorithmic fairness, and I agree with Alison Gardner, another co-founder of WLinAI, about the need for a regulatory framework when she says,

Examples of algorithmic bias are revealed every day and unwittingly demonstrate significant discrimination. It is clear now that we urgently need a regulatory framework addressing all stages of AI development, from concept to societal impact. We must govern AI so the benefits are shared by all and we do not end up automating inequality.

The WLinAI report's 10 recommendations

1. Introduce a regulatory approach governing the deployment of AI which mirrors that used for the pharmaceutical sector.

2. Establish an AI regulatory function working alongside the Information Commissioner’s Office and Centre for Data Ethics – to audit algorithms, investigate complaints by individuals, issue notices and fines for breaches of GDPR and equality and human rights law, give wider guidance, spread best practice and ensure that algorithms are fully explained to users and open to public scrutiny.

3. Introduce a new ‘Certificate of Fairness for AI systems’ alongside a ‘kite mark’ type scheme to display it. Criteria to be defined at industry level, similarly to food labelling regulations.

4. Introduce mandatory AIAs (Algorithm Impact Assessments) for organisations employing AI systems that have a significant effect on individuals.

5. Introduce a mandatory requirement for public sector organisations using AI for particular purposes to inform citizens that decisions are made by machines, explain how the decision is reached and what would need to change for individuals to get a different outcome. (A rough sketch of such an explanation follows this list.)

6. Introduce a ‘reduced liability’ incentive for companies that have obtained a Certificate of Fairness to foster innovation and competitiveness.

7. To compel companies and other organisations to bring their workforce with them – by publishing the impact of AI on their workforce and offering retraining programmes for employees whose jobs are being automated.

8. Where no redeployment is possible, to compel companies to make a contribution towards a digital skills fund for those employees.

9. To carry out a skills audit to identify the wide range of skills required to embrace the AI revolution.

10. To establish an education and training programme to meet the needs identified by the skills audit, including content on data ethics and social responsibility. As part of that, we recommend the setting up of a solid, courageous and rigorous programme to encourage young women and other underrepresented groups into technology.
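Recommendation 5's demand to tell citizens what would need to change for a different outcome is essentially a call for counterfactual explanations. As a rough sketch of the idea, assuming a deliberately toy, fully transparent scoring rule (the weights, threshold and feature names below are invented for illustration, not taken from any real system):

```python
# Toy, invented scoring rule standing in for an automated decision
# system; WEIGHTS, THRESHOLD and the feature names are assumptions.
WEIGHTS = {"income": 0.5, "debt": -0.3, "years_employed": 0.2}
THRESHOLD = 10.0  # score >= THRESHOLD means the application is approved

def score(applicant):
    # Weighted sum of the applicant's features.
    return sum(weight * applicant[feature]
               for feature, weight in WEIGHTS.items())

def counterfactuals(applicant):
    """For each feature, the value it would need to take (holding the
    others fixed) for the score to reach the threshold: that is, what
    would need to change for the individual to get a different outcome."""
    gap = THRESHOLD - score(applicant)
    return {feature: round(applicant[feature] + gap / weight, 2)
            for feature, weight in WEIGHTS.items() if weight != 0}

applicant = {"income": 12.0, "debt": 10.0, "years_employed": 2.0}
print(score(applicant))            # 3.4, below 10.0, so declined
print(counterfactuals(applicant))
# {'income': 25.2, 'debt': -12.0, 'years_employed': 35.0}
```

A real tool would also have to constrain the answers to feasible, actionable changes (debt cannot go negative) and to cope with opaque models, which is exactly why the report pushes for explanations that are open to public scrutiny.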

Down the road from here


Jo Stevens, Labour Member of Parliament and also speaking, hit the nail on the head: the issue of algorithmic fairness needs to be taken up at the international level. She acknowledged the importance of the 10 Principles document and said that she would make sure that it was entered into the political process.

After the event, Stevens told me that, "AI is no different to any other sector of industry, so we need to think and apply and assess its impact through the same ethics, transparency, accountability, and values. I hope women will lead the way on this. We are brimming with ideas, expertise and solutions."


Joanna Bryson, a leading voice in AI ethics based at the University of Bath, also spoke at the launch event about why these issues are so important and how they reveal the bias in our collective data. Her discussion of the need for human accountability was strong. She told me in advance that,

It's one of the most encouraging things in British governance right now to see such an entirely sensible and timely action like this arising from civil society. The principles proposed by the WLiAI team are extremely sensible and actionable, emphasising not only corporate and civic responsibility, but the complete practicability of keeping ordinary laws and procedures in place for governing these new technologies.

The basic notions of responsibility, liability, and diligence are not altered by how an individual or corporation implements its will technologically -- what changes are the details of how these notions are realised and can be recognised. Technology allows global communication and enhances the visibility of our global interdependencies, but local leadership is essential for innovating paths forwards.  It's fantastic to see this sort of coordinated initiative arise from this vital and often overlooked well of talent.

I agree with Bryson about the importance of these recommendations and their emphasis on fairness and how to achieve it. There is no question that this points us in the right direction.

I feel comforted by the Lords Select Committee on AI report, the work on algorithmic bias by the Alan Turing Institute and the emerging Centre for Data Ethics and Innovation. But there appears to be a lack of governmental urgency about solving or regulating against algorithmic injustice. While we hang around debating and thinking about these issues, ordinary people are suffering through no fault of their own. Even 10 more minutes of injustice is intolerable.

We need immediate regulation that requires a convincing demonstration that algorithms used to make impactful decisions about human lives are not unjust or biased. In my view, they should be shut down with immediate effect until accurate methods of analysis and testing are put in place, with the burden of proof on the developers and users of the algorithms. Companies may complain that this could stifle innovation. While I have a great deal of sympathy with allowing wide scope for innovative practices, some innovations are really worth stifling.

You might want to look at Joy Buolamwini's Algorithmic Justice League and her Safe Face Pledge if you need more convincing.

As pointed out earlier, the U.K. government addressed transparency and explainability by saying that "considerations need to be balanced against positive impacts use of AI brings". Surely there can be no positive impact that excuses injustice and prejudice. The Women Leading in AI report is a strong step forward in applying pressure on government to act now.