EU guidelines for the ethical development of AI
For us, and for other pedants, I summarize the guidelines for ethical AI development that the EU is planning.
The Principle of Beneficence: “Do Good”
AI systems should be designed and developed to improve individual and collective wellbeing. They can do so by generating prosperity, creating value, maximizing wealth, and promoting sustainability. At the same time, beneficent AI systems can contribute to wellbeing by working towards a fair, inclusive and peaceful society, by helping to increase citizens’ mental autonomy, and by promoting the equal distribution of economic, social and political opportunity. AI systems can be a force for collective good when deployed towards objectives like: the protection of democratic process and the rule of law; the provision of common goods and services at low cost and high quality; data literacy and representativeness; damage mitigation and trust optimization towards users; and the achievement of the UN Sustainable Development Goals, or sustainability understood more broadly according to the pillars of economic development, social equity, and environmental protection. In other words, AI can be a tool to bring more good into the world and/or to help with the world’s greatest challenges.
The Principle of Non-maleficence: “Do No Harm”
AI systems should not harm human beings. By design, AI systems should protect the dignity, integrity, liberty, privacy, safety, and security of human beings in society and at work. AI systems should not threaten the democratic process, freedom of expression, freedom of identity, or the possibility to refuse AI services. At the very least, AI systems should not be designed in a way that enhances existing harms or creates new harms for individuals. Harms can be physical, psychological, financial or social. AI-specific harms may stem from the treatment of data on individuals (i.e. how it is collected, stored, used, etc.). To avoid harm, data must be collected and used for the training of AI algorithms in a way that avoids discrimination, manipulation, or negative profiling. Of equal importance, AI systems should be developed and implemented in a way that protects societies from ideological polarization and algorithmic determinism. Vulnerable demographics (e.g. children, minorities, disabled persons, elderly persons, or immigrants) warrant greater attention to the prevention of harm, given their unique status in society. Inclusion and diversity are key ingredients for the prevention of harm, ensuring the suitability of these systems across cultures, genders, ages, life choices, etc. Therefore, not only should AI be designed with its impact on various vulnerable demographics in mind, but those demographics should also have a place in the design process (whether through testing, validation, or other involvement). Avoiding harm may also be viewed in terms of harm to the environment and animals; thus the development of environmentally friendly AI may be considered part of the principle of avoiding harm. The Earth’s resources can be valued in and of themselves or as a resource for humans to consume. In either case it is necessary to ensure that the research, development, and use of AI are done with an eye towards environmental awareness.
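To make the point about non-discriminatory training data slightly more concrete, here is a minimal sketch, in Python, of checking a training set for under-represented demographic groups before training begins. The group key, threshold, and function names are my own illustration, not anything prescribed by the Guidelines:

```python
from collections import Counter

def representation_report(records, group_key="gender", min_share=0.10):
    """Report the share of each demographic group in a training set
    and flag groups that fall below a minimum share.

    `group_key` and `min_share` are illustrative choices, not values
    prescribed by the Guidelines.
    """
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    report = {}
    for group, n in counts.items():
        share = n / total
        report[group] = {
            "count": n,
            "share": round(share, 3),
            "under_represented": share < min_share,
        }
    return report

# Toy example: a heavily skewed dataset.
data = [{"gender": "m"}] * 90 + [{"gender": "f"}] * 10
for group, stats in representation_report(data).items():
    print(group, stats)
```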
The Principle of Autonomy: “Preserve Human Agency”
Autonomy of human beings in the context of AI development means freedom from subordination to, or coercion by, AI systems. Human beings interacting with AI systems must keep full and effective self-determination over themselves. For a consumer or user of an AI system, this entails a right to decide whether to be subject to direct or indirect AI decision making, a right to knowledge of direct or indirect interaction with AI systems, a right to opt out, and a right of withdrawal. Self-determination in many instances requires assistance from governmental or non-governmental organisations to ensure that individuals and minorities are afforded opportunities comparable to the status quo. Furthermore, to ensure human agency, systems should be in place to ensure responsibility and accountability. It is paramount that AI does not undermine the necessity for human responsibility in protecting fundamental rights.
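The rights to knowledge, opt-out, and withdrawal suggest a simple design pattern: gate every automated decision behind an explicit, revocable consent check, with a non-AI fallback so that opting out does not mean losing the service. A minimal sketch, with hypothetical class and function names of my own:

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRegistry:
    """Tracks, per user, whether they have agreed to AI decision making.
    Consent is revocable at any time (right of withdrawal)."""
    _consents: dict = field(default_factory=dict)

    def grant(self, user_id: str) -> None:
        self._consents[user_id] = True

    def withdraw(self, user_id: str) -> None:
        self._consents[user_id] = False

    def has_consent(self, user_id: str) -> bool:
        # Default to False: no automated decision without informed consent.
        return self._consents.get(user_id, False)

def decide(user_id: str, registry: ConsentRegistry, model, fallback):
    """Route to the AI model only with consent; otherwise use a
    human/manual fallback so opting out does not forfeit the service."""
    if registry.has_consent(user_id):
        return model(user_id)
    return fallback(user_id)

registry = ConsentRegistry()
registry.grant("alice")
print(decide("alice", registry,
             model=lambda u: "ai-decision",
             fallback=lambda u: "manual-review"))   # ai-decision
registry.withdraw("alice")
print(decide("alice", registry,
             model=lambda u: "ai-decision",
             fallback=lambda u: "manual-review"))   # manual-review
```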
The Principle of Justice: “Be Fair”
For the purposes of these Guidelines, the principle of justice holds that the development, use, and regulation of AI systems must be fair. Developers and implementers need to ensure that individuals and minority groups remain free from bias, stigmatisation and discrimination. Additionally, the positives and negatives resulting from AI should be evenly distributed, avoiding placing vulnerable demographics in a position of greater vulnerability and striving for equal opportunity in terms of access to education, goods, services and technology amongst human beings, without discrimination. Justice also means that AI systems must provide users with effective redress if harm occurs, or effective remedy if data practices are no longer aligned with human beings’ individual or collective preferences. Lastly, the principle of justice also commands those developing or implementing AI to be held to high standards of accountability. Humans might benefit from procedures enabling the benchmarking of AI performance against (ethical) expectations, as sketched below.
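That last sentence about benchmarking can be made concrete. Below is a minimal sketch of one common fairness check, the demographic parity ratio; the metric choice and the 0.8 cut-off (the so-called “four-fifths rule”) are my illustration, not a requirement of the Guidelines:

```python
def demographic_parity_ratio(outcomes):
    """outcomes: list of (group, favorable: bool) decision records.
    Returns the ratio of the lowest to the highest per-group rate of
    favorable outcomes (1.0 means a perfectly even distribution),
    plus the per-group rates themselves."""
    rates = {}
    for group in {g for g, _ in outcomes}:
        group_outcomes = [fav for g, fav in outcomes if g == group]
        rates[group] = sum(group_outcomes) / len(group_outcomes)
    return min(rates.values()) / max(rates.values()), rates

# Toy example: group "b" receives favorable outcomes less often.
decisions = ([("a", True)] * 50 + [("a", False)] * 50
             + [("b", True)] * 30 + [("b", False)] * 70)
ratio, rates = demographic_parity_ratio(decisions)
print(rates)                         # per-group favorable rates
print(f"parity ratio: {ratio:.2f}")  # 0.60 here
print("meets four-fifths rule" if ratio >= 0.8 else "fails four-fifths rule")
```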
The Principle of Explicability: “Operate Transparently”
Transparency is key to building and maintaining citizens’ trust in the developers of AI systems and in AI systems themselves. Both technological and business model transparency matter from an ethical standpoint. Technological transparency implies that AI systems be auditable, comprehensible and intelligible to human beings at varying levels of comprehension and expertise. Business model transparency means that human beings are knowingly informed of the intentions of the developers and technology implementers of AI systems. Explicability is a precondition for achieving informed consent from individuals interacting with AI systems; to ensure that the principles of explicability and non-maleficence are achieved, informed consent should be sought. Explicability also requires that accountability measures be put in place. Individuals and groups may request evidence of the baseline parameters and instructions given as inputs for AI decision making (the discovery or prediction sought by an AI system, or the factors involved in the discovery or prediction made) from the organisations and developers of an AI system, the technology implementers, or another party in the supply chain.
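Auditability implies that every automated decision leaves enough of a trace to reconstruct what went into it. A minimal sketch of such a decision audit record follows; the field names and the model name are hypothetical, not a schema from the Guidelines:

```python
import json
from datetime import datetime, timezone

def audit_record(model_version, inputs, output, parameters):
    """Build a structured log entry capturing the inputs, parameters,
    and output of one AI decision, so that individuals or auditors can
    later request evidence of how it was made."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,            # features presented to the model
        "parameters": parameters,    # configuration/thresholds in force
        "output": output,            # the decision or prediction
    }

entry = audit_record(
    model_version="credit-scorer-1.4",  # hypothetical model name
    inputs={"income": 42000, "tenure_months": 18},
    output={"approved": False, "score": 0.41},
    parameters={"approval_threshold": 0.5},
)
# Append as one JSON line; a real system would use tamper-evident,
# append-only storage rather than stdout.
print(json.dumps(entry))
```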