An Overview of Artificial Intelligence Ethics and Regulations

In the past 18 months, we have seen an enormous rise in interest in AI development and adoption. Countries are developing national strategies, and companies are positioning themselves to compete in the fourth industrial revolution. With this pervasive push of AI comes increased awareness and responsibility: AIs should act in the interest of humans, and achieving this behaviour is not as trivial as one might think.

This article provides an overview of key initiatives that propose ways to approach AI ethics, regulation and sustainability. As this is a fast-evolving field, I aim to update this article regularly. Please leave comments below and I will update the article as soon as I can. At the end of this article, I also comment on keeping AI ethics "pragmatic" through a concept I call "Minimum Viable Ethics" or "Minimum Viable Regulation" for AI - this is to ensure that we keep a strong, sustainable pace in our AI research and development.

And before we go into details, one might ask why there is such a fuss about AI ethics and regulation - why would anyone need to worry about such issues; isn't AI just like any other technology? There are several reasons why this topic is key to our future society and industry. One is that AI is already making decisions with a major influence on human lives, including human health, fortune and rights. Think of the AI technologies used in self-driving cars, medical diagnostics, autonomous weapons, financial advisory, automated trading and automated visa applications. All these AIs have substantial control of a process whose outcomes would normally be attributed to a human. AIs take actions and make decisions that can alter the course of a person's life dramatically. Good ethics, regulations and guidelines on AI provide a basis for trust, and many institutes are working on establishing and executing on these guidelines to ensure a sustainable future for this industry.

What is AI ethics, AI regulation, AI sustainability?

For the sake of simplicity, I have used the umbrella term "AI ethics and regulation", and under this umbrella, you find many topics. Below are seven key notions associated with AI ethics, regulation and sustainability.

Algorithmic Bias and Fairness. An AI exhibits algorithmic bias when its decisions and actions reflect the implicit values of the humans involved in coding the algorithm or in collecting, selecting and using the data that trains it.
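One common way to make this notion concrete is to compare decision rates across groups, a fairness criterion often called demographic parity. The sketch below is purely illustrative; the data, group labels and threshold are hypothetical, not taken from any real system.

```python
# Illustrative demographic-parity check: the rate of positive decisions
# should be similar across groups. All data here is hypothetical.

def positive_rate(decisions, groups, group):
    """Fraction of positive (1) decisions received by one group."""
    selected = [d for d, g in zip(decisions, groups) if g == group]
    return sum(selected) / len(selected)

decisions = [1, 0, 1, 1, 0, 0, 1, 0]   # 1 = e.g. loan approved
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]

rate_a = positive_rate(decisions, groups, "a")   # 0.75
rate_b = positive_rate(decisions, groups, "b")   # 0.25
disparity = abs(rate_a - rate_b)                 # 0.5 -> large gap, a bias flag
```

A large disparity does not prove unfair treatment on its own, but it is exactly the kind of measurable signal that fairness audits start from.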

AI Safety. An example here is adversarial attacks: neural networks can be fooled by small, deliberately crafted changes to their inputs. How can we manage such vulnerabilities in AI?
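To illustrate the idea behind such attacks without a full neural network, here is a toy "gradient sign"-style perturbation against a linear classifier. The weights, input and step size are invented for illustration; real attacks such as FGSM apply the same principle to deep networks.

```python
# Toy adversarial perturbation of a linear classifier.
# All numbers are made up for illustration.

w = [1.0, -2.0, 0.5]     # model weights
b = 0.1                  # bias term
x = [0.4, 0.1, 0.2]      # an input the model classifies as positive

def sign(v):
    return 1.0 if v > 0 else -1.0

def predict(inputs):
    score = sum(wi * xi for wi, xi in zip(w, inputs)) + b
    return 1 if score > 0 else 0

# For a linear model, the gradient of the score w.r.t. the input is w,
# so stepping each feature against the sign of its weight lowers the score.
eps = 0.3
x_adv = [xi - eps * sign(wi) for wi, xi in zip(w, x)]

print(predict(x))      # 1: the clean input is classified positive
print(predict(x_adv))  # 0: a small perturbation flips the decision
```

The unsettling point is that each feature moved by at most 0.3, yet the decision flipped, which is why adversarial robustness is treated as a safety problem.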

AI Security. Hacking a self-driving car or a fleet of delivery drones poses a serious risk. Whole electricity grids and transport systems benefit from autonomous decision making and optimisation, but they need to be secured at the same time. How can we secure AI systems?

AI Accountability. Who is accountable when an entire process is automated? For example, when a self-driving car is involved in an accident, who is held accountable? Is it the manufacturer of the car, the government, the driver, or the car itself?

AI Quality Standardisation. Can we ensure that AI behaves in the same way for all AI services and products?

AI Explainability. Can or should an AI be able to explain the exact reasons for its actions and decisions?
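For simple models, one workable form of explainability is to decompose a decision into per-feature contributions. The sketch below does this for a hypothetical linear credit-scoring model; the feature names and weights are invented for illustration only.

```python
# Illustrative explanation for a linear model: each feature's
# contribution to the score is weight * value. All names and
# numbers are hypothetical.

weights   = {"income": 0.8, "debt": -1.2, "age": 0.1}
applicant = {"income": 0.5, "debt": 0.4, "age": 0.3}

contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())

# List features by how strongly they drove the decision.
for feature, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"{feature}: {c:+.2f}")
# debt: -0.48, income: +0.40, age: +0.03
```

For deep networks, no such exact decomposition exists, which is precisely why explainability is an open research topic rather than a solved engineering task.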

AI Transparency. Do we understand why an AI has taken specific actions and decisions? Should there be a requirement for automated decisions to be publicly available?

There are other related topics such as responsible AI, sustainable AI, and AI product liabilities.

Existing global initiatives

I performed a comprehensive review of institutional initiatives and legal frameworks that relate to AI Ethics, Regulation and Sustainability. Institutes, countries and organisations that have proposed guidelines are listed below. Each review is not meant to be a complete description of an initiative. Instead, each initiative is described by 1-2 short facts related to what the initiative is about, what the aim of the initiative is, and selected excerpts of proposed regulations and ethical AI initiatives.

Regulations and Law

Charter of Fundamental Rights of the European Union. The Charter is applied by the bodies and institutions of the Union as well as national authorities when they implement EU law. Good to start from the beginning with fundamental rights.

"Article 21 (1) - Any discrimination based on any ground such as sex, race, colour, ethnic or social origin, genetic features, language, religion or belief, political or any other opinion, membership of a national minority, property, birth, disability, age or sexual orientation shall be prohibited."

All activities in Europe, including AIs that make decisions and take actions are meant to follow the Charter.

European AI High Level Expert Group and the EU AI Alliance. The High-Level Expert Group on Artificial Intelligence (AI HLEG) supports the implementation of the European strategy on AI. This includes the elaboration of recommendations on future AI-related policy development and on ethical, legal and societal issues related to AI, including socio-economic challenges. The AI HLEG will serve as the steering group for the European AI Alliance's work, interact with other initiatives, help stimulate a multi-stakeholder dialogue, gather participants' views and reflect them in its analysis and reports.

European Law - General Data Protection Regulation (GDPR). Several articles are relevant, including Articles 13-15, 21 and 35. We cite Article 22 on "Automated individual decision-making, including profiling."

"The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her."

European Parliament resolution of 16 February 2017. Recommendations to the Commission on Civil Law Rules on Robotics, including the study on "Ethical Aspects of Cyber-Physical Systems". The resolution also raises the prospect of a European Agency for Robotics and Artificial Intelligence.

Singapore Ethics Council. The Ministry for Communications and Information has set up a council to advise the city-state's government on the ethical and legal use of AI and data. The Singapore Management University will support the work of the council through a five-year research programme that investigates the "ethical, legal, policy and governance issues" arising from AI and data use.

Californian Government - Senate Bill No. 1001. In California, a bot is not allowed to communicate or interact with a person online with the intent to mislead that person about its artificial identity, in order to knowingly deceive them about the content of the communication so as to incentivise a purchase or sale of goods or services in a commercial transaction, or to influence a vote in an election. Like the GDPR, this is a legal regulation.

Federal Government of Germany - Data Ethics Commission. The Commission has recently suggested adding the following to the German National AI Strategy.

  1. “Upholding the ethical and legal principles based on our liberal democracy throughout the entire process of developing and applying artificial intelligence”
  2. “Promoting the ability of individuals and society as a whole to understand and reflect critically in the information society”

AI Ethics Initiatives

Institute of Electrical and Electronics Engineers (IEEE). The IEEE has proposed "Ethically Aligned Design" and describes it as follows.

"To ensure every stakeholder involved in the design and development of autonomous and intelligent systems is educated, trained, and empowered to prioritize ethical considerations so that these technologies are advanced for the benefit of humanity.

The ethical design, development, and implementation of these technologies should be guided by the following General Principles:

  • Human Rights: Ensure they do not infringe on internationally recognized human rights
  • Well-being: Prioritize metrics of well-being in their design and use
  • Accountability: Ensure that their designers and operators are responsible and accountable"

The IEEE has also proposed an AI ethics certificate for those working with AI.

International Standards Organisation ISO - JTC 1 - SC42. "The ISO committee setup the following structure for AI Standardization to deal with the diverse work program it is embarking on:

  • foundational standards working group
  • computational approaches and characteristics of artificial intelligence systems study group
  • trustworthiness study group
  • use cases and applications study group

SC 42/SG 1 - At the heart of artificial intelligence are the computational approaches and algorithmic techniques that empower the insights provided by the AI engines.  

SC 42/SG 2 - Artificial intelligence is set to join other ICT technologies that have become ubiquitous in our lives. Recognizing this potential for AI, SC 42 took the proactive decision to form a study group to look at trustworthiness and related areas from a system perspective (such as robustness, resiliency, reliability, accuracy, safety, security, privacy) from the get-go.

SC 42/SG 3 - Use cases are the currency by which standards development organizations collaborate with each other. As both the focal point of AI’s role as an enabling horizontal technology and in its role as an AI systems integration entity committee tasked with providing guidance to ISO, IEC and JTC 1 committees looking application areas, it is essential for SC 42 to collaborate with other committees and bring in their use cases."

Software & Information Industry Association (SIIA). “Ethical Principles for Artificial Intelligence and Data Analytics”.

“Organizations should not be indifferent to how the models they develop are used and by whom, and how the benefits of their new analytical services are distributed. They should aim for justice in the distribution of the services they make possible”.

Information Technology Industry Council (ITIC). Promoting Responsible Development and Use from the report AI Policy Principles.

“Robust and Representative Data: To promote the responsible use of data and ensure its integrity at every stage, industry has a responsibility to understand the parameters and characteristics of the data, to demonstrate the recognition of potentially harmful bias, and to test for potential bias before and throughout the deployment of AI systems. AI systems need to leverage large datasets, and the availability of robust and representative data for building and improving AI and machine learning systems is of utmost importance”.

International Telecommunication Union (ITU). "As the UN specialized agency for information and communication technologies, ITU is well placed to guide AI innovation towards the achievement of the UN Sustainable Development Goals. We are providing a neutral platform for international dialogue aimed at building a common understanding of the capabilities of emerging AI technologies."

Partnership on AI. "Fair, Transparent, and Accountable AI”. Consider Statement 5: Social and Societal Influences of AI:

"AI advances will touch people and society in numerous ways, including potential influences on privacy, democracy, criminal justice, and human rights. For example, while technologies that personalize information and that assist people with recommendations can provide people with valuable assistance, they could also inadvertently or deliberately manipulate people and influence opinions. We seek to promote thoughtful collaboration and open dialogue about the potential subtle and salient influences of AI on people and society."

Future of Life Institute. Here is an example of the ASILOMAR AI PRINCIPLES. “The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends”.

AI Now Institute. "The AI Now Institute is an interdisciplinary research centre dedicated to understanding the social implications of artificial intelligence." The institute focuses on four core domains: Rights & Liberties, Labor & Automation, Bias & Inclusion, and Safety & Critical Infrastructure.

Nordic Council of Ministers. AI offers significant potential for the Nordic and Baltic countries in business and public sector activities. The council focuses on "Developing ethical and transparent guidelines, standards, principles and values to guide when and how AI applications should be used." and "Infrastructure, hardware, software and data, all of which are central to the use of AI, are based on standards, enabling interoperability, privacy, security, trust, good usability, and portability."

UN Centre for Artificial Intelligence and Robotics. "The aim of the Centre is to enhance understanding of the risk-benefit duality of Artificial Intelligence and Robotics through improved coordination, knowledge collection and dissemination, awareness-raising and outreach activities. The main outcome of the above initiative will be that all stakeholders, including policymakers and governmental officials, possess improved knowledge and understanding of both the risks and benefits of such technologies and that they commence discussion on these risks and potential solutions in an appropriate and balanced manner.”

Organisation for Economic Co-operation and Development (OECD). The OECD has created an expert group to foster trust in artificial intelligence. The group helps governments, business, labour and the public maximise the benefits of AI and minimise its risks. Notably, Garry Kasparov (who was beaten by the supercomputer Deep Blue in 1997) supports the creation of the OECD group.

United Nations Educational, Scientific and Cultural Organization (UNESCO). The initiative focuses on ethical issues related to modern robotics, and on the ethics of nanotechnologies and converging technologies. The main ethical principles and values addressed are Human Dignity, Value of Autonomy, Value of Privacy, the ‘Do not harm’ Principle, the Principle of Responsibility, Value of Beneficence and Value of Justice.

The Montreal Declaration. The Montreal Declaration for responsible AI development has three main objectives:

1. Develop an ethical framework for the development and deployment of AI;

2. Guide the digital transition so everyone benefits from this technological revolution;

3. Open a national and international forum for discussion to collectively achieve equitable, inclusive, and ecologically sustainable AI development.

Comment

I was recently interviewed for the MIT Sloan Management Review. I suggested that organisations currently have a "huge" variety of approaches to seizing opportunities in the era of AI. This of course also includes approaches to using AI ethically, which is part of a complete strategy. As part of a new and efficient strategy, every industry and organisation will need to review its own ethics and that of its country or region.

At the same time, we need to stay pragmatic. I am a strong proponent of iteratively improving a strategy and its guidelines. Let us have a starting point with a workable AI ethics formulation - something I would call Minimum Viable Ethics or Minimum Viable Regulation for AI. There is little practical point in aiming for a regulation that demands a 100% ethical AI, possibly more ethical than any individual human. And what would 100% absolute ethics even look like?

Do you have comments and suggestions to add more regulations or initiatives? Please comment below, and reach out, also on Twitter.


Christian Guttmann (PhD) has dedicated 25+ years of his life to advancing Artificial Intelligence research, innovation and industrial products and services. Christian is a strong advocate of the ethical use of AI. He is the Executive Director Nordic AI Institute, Prof (Adj. Assoc.) University of New South Wales, Researcher (Adj.) Karolinska Institute, VP & Global Head of AI Tieto.
