ETHICS FIRST

By Erny Gillen, January 9, 2019

A Contribution to the Consultation of the Draft Ethics Guidelines For Trustworthy AI, produced by the European Commission’s High-Level Expert Group on Artificial Intelligence

By Erny Gillen, Ethicist, Luxembourg

Introduction

The working document published on December 18, 2018, for consultation until January 18, 2019, makes trustworthy AI its north star by ensuring AI’s ethical purpose and its technical robustness.

https://ec.europa.eu/digital-single-market/en/news/draft-ethics-guidelines-trustworthy-ai

My contribution will address the intended ethical purpose issue through eight major systemic questions. It intends to clarify the core directions taken by the HLEG-AI in the working document and thus to contribute to a successful and ethics-first AI strategy within the EU. In the second part I add specific comments.

Nevertheless, before entering into any systematic discussion of the content, I must raise my deep concern about the process and the related risks for a good outcome.

I. Process issues and related risks

The credibility and plausibility of the Draft Guidelines strongly depend on their explicability, to use the term introduced by the HLEG-AI as a precondition for trustworthiness (of AI). The Working Document is notably silent about the rationale for the composition of the HLEG-AI, the reasoning behind the methodological and content choices made, the culture and style of the internal dialogue processes, the limitations of this consultation, organised in the midst of major holiday breaks (when even the EU-AGM system stands still), …

Those pitfalls would, according to the Guidelines, not be accepted if performed by any AI system. More importantly, they harm the core intentions of the Guidelines and its accompanying process, as called for in March 2018 by the EGE. I strongly recommend reconsidering the timeline and showing true openness for discussing and integrating divergent opinions. Following the EGE, the process should foster “a dialogue that focusses on the values around which we want to organise society and the role that technology should play in it”.

The final Guidelines should, for instance, clearly demonstrate that they serve European citizens and not primarily the interests of those called to be members of the HLEG, especially the many directly involved and prima facie overrepresented AI companies and their academic consultants.

I’m painfully aware that it is hard to organise a fair process under time constraints and political deadlines. I nevertheless urge the HLEG-AI to reconsider the chosen path, notably also because of the weaknesses in its systematics, as I will demonstrate with the following systemic issues to be addressed first.

II. Eight systemic issues remaining vague, ambiguous or unanswered

  1. Who is the moral / legal subject of trustworthy AI?
  2. How should AI users be protected?
  3. How will a human centric approach (vs. a humane approach) distinguish good and bad intentions, right and wrong actions within a human community composed of people of good will and terrorists?
  4. How can it be ensured that ethics is more than a mere function of and for competitiveness, or a risk for AI innovation?
  5. How can the EU promote trustworthy AI made in other parts of the one world?
  6. Which principles should consistently guide the interaction between human users and AI-driven systems?
  7. How must the concept of ‘informed consent’ be designed to serve and to protect users?
  8. How could the two guiding lists of principles and requirements be harmonised?

Ad 1: Who is the moral / legal subject of trustworthy AI?

The working document refers to a broad range of subjects while addressing trustworthy AI. Sometimes AI is referred to as the virtual acting subject or the grammatical subject; on other occasions AI seems to be the object of the ethical guidelines. In the latter case, developers, researchers, producers and even users become the addressees of the guidelines and thus the subjects responsible for the trustworthiness of AI.

It would be helpful to clarify this issue right from the beginning.

I would recommend accepting AI partly as the subject of these guidelines from level 3 onwards, following the classification in footnote 24, and introducing a well-thought-through concept of shared responsibility for developers, researchers and producers. In this sense AI would be part of a collective subject for which, for example, a special legal body could be created within a new legal framework for autonomous systems (cf. the discussions around the Mady Delvaux proposal in the European Parliament).

Ad 2: How should AI users be protected?

The Draft Guidelines should not mix up users (consumers) and producers. This is paramount for the concept of trustworthiness and for the consistency of the chosen approach, if the HLEG-AI wants to maintain the logic behind the 4 + 1 Principles as inspired by biomedical ethics. Those principles were meant by James Childress and Tom Beauchamp to organise the asymmetric interaction between competent healthcare professionals on the one hand and vulnerable patients on the other by imposing, in the tradition of Hippocrates, the burden of implementing this specific ethos on the professionals. I’m aware that in some cases the lines between users and producers blur, but that should not happen within the Guidelines.

The EU has a clear role in consumer protection, which should not be given up in the field of AI, especially if the aim is to promote the trustworthiness of AI. Mixing up stakeholders is an unacceptable trap, as is the subordination of ethics to competitiveness.

Ad 3: How will a human centric approach (vs. a humane approach) distinguish good and bad intentions, right and wrong actions within a human community composed of people of good will and terrorists?

I completely share the concern that AI should aim at “protecting and benefiting both individuals and the common good”. But the term “human centric approach”, as coined in the working document, is strongly misleading. The human community is diverse and many interests compete with others. Yet there are generally accepted red lines about what is bad and wrong. Those boundaries constitute our societies and protect citizens. AI should not serve those members of the human family who, for instance, pursue criminal intentions or put citizens or society as a whole at risk.

To avoid this underlying misunderstanding semantically, the HLEG-AI could use the concept of a “humane approach” (in the sense of beneficial or good AI), thus introducing a partly open criterion for discerning which humans to serve.

The definition in the glossary partly addresses the expressed concern by saying: “The human-centric approach to AI strives to ensure that human values are always the primary consideration … with the goal of increasing citizen’s well-being.” If this definition is to be maintained, I strongly recommend changing “primary” to “main” consideration and adding “in Europe accepted” before “values”!

Under the imported Principle of non-maleficence the notion of environmental friendliness is introduced out of the blue, thus broadening the scope for responsible AI. The crucial question whether AI should serve Life in general or the common good of the human communities is asked, but remains unanswered. The working document as a whole nevertheless promotes a “human(e) centric approach”. I recommend adding environmental friendliness at the beginning of the document as a concern of and for human life, thus including it in an inclusive “humane approach”.

Ad 4: How can it be ensured that ethics is more than a mere function of and for competitiveness, or a risk for AI innovation?

In the working document there is a tendency to subordinate ethics to competitiveness. I do agree that ethics can and should foster responsible competitiveness. But any ethic worth its name should not be reduced to simply serving a predefined but limited ethical purpose, like competition. Ethical reflection can’t be domesticated without aborting it, especially within Ethics Guidelines!

I recommend that the normal and healthy tension between competitiveness and ethics be acknowledged and used productively for the development of an ever-evolving ethical discourse and an evolving discourse about responsible competitiveness. To signal this concern semantically, it would be worth not using the term “AI Ethics” but talking about Ethics in AI, as is now frequently done in Medicine, where the state-of-the-art term is Ethics in Medicine and no longer medical ethics.

The working document expresses, again and again, scepticism about ethical reflection or ethical interventions. This is absolutely strange for a document that wants to promote ethical guidelines, and it should be cleared of those jeopardising assertions.

By the same token, some authors of the working document even seem to be convinced that biases are mainly injected into AI and autonomous systems by human designers and testers. They even suggest the primacy of AI (as a subject) to overcome human-born biases, as stated, for instance, in the Glossary: “AI can help humans to identify their biases, and assist them in making less biased decisions”. Trustworthy AI certainly can help to identify biases, but it can also produce biases and overlook others. Ethical discernment should not be unilaterally or simplistically delegated to algorithms, as acknowledged in other parts of the working document.

Ad 5: How can the EU promote trustworthy AI made in other parts of the one world?

Even though I like the “made in Europe” brand and idea, I do not think that the EU can or should limit its ambitions to those AI systems “made at home”. The scope of these Guidelines should be AI systems used in Europe, whether made in China or the US. If the EC really wants to promote trustworthy AI, it should envisage addressing all systems used on its territories.

As this ambition is clearly mentioned as a goal for the longer run, the HLEG-AI should relinquish the expression “made in Europe”. 

Ad 6: Which principles should consistently guide the interaction between human users and AI-driven systems?

Introducing the four generic principles from the field of ethics in biomedicine as overarching principles into the fields of AI is certainly of good pedagogical value and easy to communicate. But exporting this set of principles necessarily also introduces the invisible line of power balance between AI (as a subject or as part of a collective subject) and the users. The analogy between medicine and patients on the one hand and AI and users on the other does not fully hold. More thought and research should be invested in this possible, but limping, analogy.

Despite diverse criticisms, the four principles have shown that they are able to build a consistent and relatively easy-to-transmit framework. One of their strengths lies in the presumption that they are comprehensive. The working document, inspired by An Ethical Framework for a Good AI Society, adds a fifth principle which, from the perspective of the four Principles by Childress and Beauchamp, could easily be subsumed under their third Principle of Autonomy. The added Principle of Explicability explicitly refers to the concept of “informed consent”, which would typically be part of the principle of autonomy within the original framework.

In order to be consistent and original (in both senses), I recommend staying with the four principles and including the transparency concern (contained in the explicability principle) in the third Childress and Beauchamp Principle of Autonomy.

The larger problem of explicability, in the sense of intelligibility and explainability, should be addressed outside of the four comprehensive principles. It best fits as a conditio sine qua non introduction to the set of the four principles, because all four fundamentally depend on the explicability of AI as an input for ethical consideration, reflection and decision-making. Thus, the fifth Principle should not be part of the closed list of the four principles, but a preliminary principle conditioning the set of the four principles.

Ad 7: How must the concept of ‘informed consent’ be designed to serve and to protect users?

There are numerous academic and practical discussions about the validity of the concept of ‘informed consent’ and its meaningful understanding in Medicine and Ethics. Nonetheless, it works properly in contexts where it is embedded in an ethic of care, supporting and promoting the autonomy of the weaker party while simultaneously excluding dominant or paternalistic behaviour exercised by the asymmetrically more powerful party.

As the Draft Guidelines under scrutiny do not distinguish clearly between the different stakeholders and their (legitimate) divergent interests, the introduction of the concept of ‘informed consent’ jeopardises its original intent. It easily becomes the loophole for all kinds of strategies by the many stakeholders. The language chosen by the HLEG-AI in the working document goes exactly in the wrong direction: informed consent shall not be “achieved” but respectfully sought, if the concept is introduced to protect the user / patient and not, the other way round, the producer / medical doctor.

Given the obligation of the EU to protect its citizens, this language and the possible strategy behind it are inadmissible!

Users should be protected and not trapped, neither by AI nor by Ethics Guidelines! The HLEG-AI Guidelines must show how they efficiently intend to protect all users and consumers, especially the most vulnerable.

Ad 8: How could the two guiding lists of principles and requirements be harmonised?

For the reader and user of the Working Document it would be helpful to deal with one integrated systematic approach. At present there are two lists: first the list of the Four Principles from Childress and Beauchamp plus (according to my proposal) the preliminary Principle of Explicability in Chapter I, and then “the ten requirements” as “derived from the rights, principles and values of Chapter I” in Chapter II.

  • The Requirements of accountability, robustness and transparency could be subsumed under the preliminary principle of explicability.
  • The Requirements of Governance of AI Autonomy and Data Governance should be listed under the ethical principle of beneficence. Otherwise this guiding principle is completely missing under the requirements!
  • The Requirements of Safety and (the missing) Environmental Friendliness could be subsumed under the Principle of Do not harm.
  • The Requirements of Respect for (& Promotion of) Human Autonomy, Respect for Privacy and Transparency (in the above-mentioned sense) would be well understood under the Principle of Autonomy (from list one).
  • The Requirements of Design for all and Non-Discrimination would be massively enhanced if listed under the Principle of Justice and Fairness, thus avoiding simplistic egalitarian language.

III. Specific Comments:

1) The definition of values and the use of the word “value” throughout the working document lack consistency and clarity!

The example given to underpin the ethical purpose circuit is wrong when it comes to the value: informed consent isn’t a value! The protected value is freedom or self-determination! Footnote 2 refers to values as things, which is wrong again. Values are attitudes, inclinations, habits, intentions: they describe concepts, not things!

2) Under the critical concerns raised by AI, I suggest adding credit scoring and robo-advice, for instance in the finance industry.

3) The conclusion under ‘Governance of AI’ is inconsistent and dangerous.

It is stated that the user’s preferences and the “overall wellbeing of the user” (which might be contradictory under ethical analysis) should be promoted by systems that are tasked to help users. As this conclusion is about Governance, the HLEG-AI should strongly recall that those preferences should be conditioned by the given laws and rules and the standards of art (in medicine and nursing, for instance). I would recommend deleting the last sentence!

Respect for Human Autonomy is a key concept throughout the working document. Human faculties can certainly be enhanced, but human autonomy should be promoted. Enhancing one’s autonomy from outside contradicts one’s internal autonomy! I recommend replacing “Enhancement” with “Promotion of Human Autonomy” in order to be sound and consistent.

4) Lethal Autonomous Weapon Systems (LAWS) should be banned

Under the critical concerns the description omits to refer to the many requests by civil society and researchers to ban lethal autonomous weapon systems. That option should at least be mentioned, if not promoted, by the HLEG-AI! This should, according to my ethical convictions, be the position of the HLEG-AI.

5) The missing research in Ethics

Under the Non-Technical Methods to ensure trustworthy AI, I recommend prominently adding research in fundamental and applied Ethics in the many fields of AI. Ethics is a philosophical discipline which has shown great ability to evolve alongside ever-changing environments. There is a great need to identify researchers able to dive deep into the complexity of AI and the complexity of Ethics in order to come up with helpful concepts. There is also a need for regulators and public administrations to deepen their understanding of modern Ethics as an evolving science, to be involved in the critically needed policy designs for trustworthy AI in Europe.

Ethics in AI should not be promoted as an internal and mere technical specialisation, but as a professional, multidisciplinary and philosophical approach in the fields of AI. Ethics in AI, as a term, serves that purpose much better than the wording “AI Ethics”. 

6) AI Review Board, ethical reflection and ethics committees

In Chapter III, ethical Review Boards are mentioned. In the field of Ethics in Medicine, IRBs (Institutional Review Boards) are clearly distinguished from Ethics Committees. Review Boards make sure that the standards of art are respected and validate certain projects from researchers. Thus they work alongside given rules, whereas (Hospital or National) Ethics Committees deal with the grey zones in individual or policy domains. They provide advice to medical doctors, patients and politicians with good arguments and proposals, but they never decide upon the right or wrong choice. The ultimate choice remains with those responsible to act.

I recommend using and adapting the good practices from institutionalised ethical bodies and functions for the fields of Ethics in AI.

IV. Conclusion:

My comments on the Draft Ethics Guidelines, open for consultation, are intended to contribute to successful and consistent Ethics Guidelines for AI in the sense the EGE asked for in its March 2018 Statement: the process “should integrate a wide, inclusive and far-reaching societal debate, drawing upon the input of diverse perspectives, where those with different expertise and values can be heard.”

The current tone of the Working Document does not properly reflect the potential existential risks of AI as widely perceived by the general public, major scientists and philosophers. Thus, it undermines its own intention to promote the trustworthiness of AI. The systematics behind the principles is not (yet) consistent and sound and should urgently be addressed before entering into wording and language issues. European Ethical Guidelines for AI should put ethics first and not competitiveness, because Europe has shown, and shows, that it is able to combine both without giving up the one or the other. Social ethics is more than the sum of individual moral choices; it is about an ethic of care and solidarity. Consistent and well-thought-out Ethical Guidelines with the ambition to introduce trustworthiness as the north star for AI used in Europe are very much needed and should not be sacrificed under the pressure of lobbyists, short-term political agendas or mere time constraints.

Dr. Erny Gillen

Thank you for sending your comments or requests to my office: 

office@moralfactory.com