The advances in artificial intelligence technology create an imperative to protect individuals from discrimination in the context of automated decisions. In order to meet their obligations under Article 26 of the International Covenant on Civil and Political Rights (ICCPR), states need to adopt a holistic and multi-pronged approach to regulation. Anna Lauren Hoffmann coined the term “data violence” to describe incidents in which developers gather data and program artificial intelligence systems in a manner that results in harmful outcomes for individuals. Hoffmann elaborates that, “Those choices are built on assumptions and prejudices about people, intimately weaving them into processes and results that reinforce biases and, worse, make them seem natural or given.” A recent example is the British authorities’ use of an algorithm to predict students’ final results on their A-level examinations; the model favored students from private schools.
Susan Leavy argues that diversity among computer scientists will ensure that developers do not embed societal biases into artificial intelligence systems. (p. 14) However, the fact that a company employs a diverse team does not guarantee that the team will program the system in a manner which ensures compliance with the principle of non-discrimination. The recent events at Google highlight that regulating new technologies alone is insufficient to prevent discrimination. There is a close relationship between how corporations treat their employees, how they operate and how they design artificial intelligence technologies. It is therefore necessary for states to regulate how organisations hire personnel, treat their employees, operate and allocate resources. In December 2020 the BBC reported that Google dismissed the “highly influential” artificial intelligence computer scientist Timnit Gebru. Gebru challenged Google’s request to retract a research paper. The paper examined how natural language models embed structural bias against women and persons belonging to ethnic minorities. Gebru describes Google as “institutionally racist.” She recounts working in a toxic environment which prevented employees from underrepresented groups from progressing. An email she circulated to her colleagues stated, “There is no incentive to hire 39% women: your life gets worse when you start advocating for underrepresented people, you start making the other leaders upset when they don’t want to give you good ratings during calibration.” The company has one of the lowest retention rates for black women.
Article 26 of the ICCPR places an obligation on states parties to adopt measures, including legislation, addressing all of these areas of concern. The Human Rights Committee in General Comment 28 explained that the ICCPR requires states parties to adopt legislative measures to ensure the enjoyment of Covenant rights by their citizens. (par. 3) Although the general comments are not legally binding on states, (pp. 3-4) states attach “great importance” to them. (p. 5) The Human Rights Committee clarified in Nahlik et al. v. Austria that states parties are under an obligation to protect individuals against discrimination not only in the public sphere but also in the private sphere with a quasi-public dimension. (par. 8.2) Thus, Article 26 requires states to enact legislation to govern organisations involved in the development and use of new technologies.
Article 26 requires states to adopt a multi-pronged approach to governing organisations and new technologies. First, it obligates states to regulate the development and use of artificial intelligence decision-making processes. The Human Rights Committee in General Comment 18 notes that the ICCPR does not define the term “discrimination” (par. 6) but explains that the term should be understood to mean “any distinction, exclusion, restriction or preference based on sex, race, colour, descent, language, religion, political or other opinion, national or social origin, property, birth, disability…or other status which has the purpose or effect of nullifying or impairing the recognition, enjoyment or exercise, on an equal footing, of human rights and fundamental freedoms in the political, economic, social, cultural or any other field of public life.” (par. 7) Article 26 is applicable to the development and operation of artificial intelligence decision-making processes because the design and operation of an automated decision-making process determine whether the use of the system results in a violation of the prohibition of discrimination. Article 26 prohibits distinguishing between, excluding or giving preference to individuals based on prohibited grounds. Consider the stages in the development process when the computer scientist defines the problem to be solved (p. 19), gathers the data (p. 78) and labels the data to make it meaningful for the artificial intelligence system. (p. 30) Solon Barocas and Andrew Selbst explain that the way the programmer defines the criteria for being a good employee and labels the data can create disadvantage for members of a particular group. (p. 680) Article 26 therefore prohibits a programmer from defining the criteria for a good candidate and labelling the data in a manner which results in the exclusion of candidates who belong to a protected group from gaining employment.
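To illustrate the point in programming terms, the following minimal sketch (written in Python, with invented data and variable names that are not drawn from any actual hiring system) shows how the labelling stage alone can embed such an exclusion into the training data before any model is built:

```python
# A hypothetical sketch of the labelling stage described above; the dataset
# and the "no career break" criterion are assumptions made for illustration.
import pandas as pd

# Historical employee records gathered by the developer (synthetic data).
employees = pd.DataFrame({
    "employee": ["A", "B", "C", "D"],
    "performance_rating": [4.1, 4.6, 3.9, 4.5],
    "took_career_break": [False, True, False, True],
})

# Labelling choice: the developer defines a "good employee" as someone with a
# high rating AND an uninterrupted career. The second criterion says nothing
# about ability, yet it becomes part of the "ground truth" the system learns.
employees["good_employee"] = (
    (employees["performance_rating"] >= 4.0)
    & ~employees["took_career_break"]
)

print(employees)
# Employees B and D are labelled as not "good" solely because of a career
# break, so any model trained on these labels will reproduce that exclusion.
```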
Article 26 is applicable to the process of the artificial intelligence system creating a model of the environment (p. 677), calculating predictions about the future performance of candidates (p. 679) and producing a decision. The artificial intelligence system generates the model of the environment by finding correlations and regularities in the data. (p. 677) Imagine a situation where the system produces an unfavorable decision for applicants because it finds a specific set of correlations in their data, and those correlations stem from the fact that the applicants took a career break to care for young children. Article 26 applies to these stages in the decision-making process. How the artificial intelligence decision-making system maps the data about individuals onto the mathematical model and how it processes the data determine whether it draws a distinction between candidates based on the protected grounds. Article 26 prohibits impairing the ability of individuals to enjoy rights on an equal basis with others as a result of using a process which distinguishes between applicants based on prohibited grounds. It follows that states should adopt legislation to require organisations to comply with the prohibition of discrimination when designing and using artificial intelligence decision-making processes.
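A short sketch of this stage (again in Python, with synthetic data and hypothetical feature names chosen purely for illustration) shows how a model trained on historical outcomes can learn to penalise an employment gap that merely reflects a caregiving break:

```python
# A hypothetical sketch of the model-building stage described above; the data,
# features and outcomes are invented and do not describe any real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200

# Synthetic past applicants: years of experience and length of any employment
# gap (here the gap arises from a career break taken to care for children).
experience = rng.uniform(2.0, 15.0, n)
took_break = rng.integers(0, 2, n)                   # 1 = caregiving break
employment_gap = took_break * rng.uniform(1.0, 3.0, n)

# Historical hiring decisions that already disfavoured applicants with gaps.
hired = ((experience > 6.0) & (took_break == 0)).astype(int)

# The model finds the correlation between the gap and the past outcomes.
X = np.column_stack([experience, employment_gap])
model = LogisticRegression().fit(X, hired)

# Two otherwise identical candidates: one with a two-year caregiving gap.
with_gap = model.predict_proba([[10.0, 2.0]])[0, 1]
without_gap = model.predict_proba([[10.0, 0.0]])[0, 1]
print(f"predicted suitability with gap:    {with_gap:.2f}")
print(f"predicted suitability without gap: {without_gap:.2f}")
# The learned correlation, not the applicants' ability, drives the difference
# in scores, which is the kind of distinction Article 26 is concerned with.
```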
Second, Article 26 applies to some situations where the use of an artificial intelligence decision-making process produces effects at the societal level. (p. 48) In a recent publication I argue that the use of artificial intelligence decision-making processes rekindles the practice of differentiating between individuals based on social class. The operation of an artificial intelligence decision-making process which creates a preference for one group while disadvantaging another falls under the prohibition in Article 26, because this provision prohibits conferring a preference on a group based on social origin. Consequently, states should legislate to create mechanisms for carrying out impact assessments so as to ensure that the effects of artificial intelligence decision-making processes at the societal level are compatible with the prohibition of discrimination.
Third, Article 26 obligates states to require corporations to adopt hiring, retention and whistleblower protection procedures which ensure compliance with the prohibition of discrimination. Gebru’s experience demonstrates that an employee can experience adverse treatment on more than one ground. By dismissing Gebru, Google treated her unfavorably based on the protected grounds of sex, race and the holding of opinions. (par. 7) Gebru experienced adverse treatment on the ground of her opinions because Google fired her for circulating information which highlighted the way in which artificial intelligence models disadvantage ethnic minorities and women. Her dismissal was additionally related to her sex and race, since Gebru expressed these opinions as a woman of colour. Article 26 of the ICCPR covers situations where an employee receives lower performance rankings or is dismissed on the ground of raising awareness about the needs of underrepresented groups. In such cases the organisation draws a distinction between employees based on their opinions and thereby excludes individuals with a protected characteristic from accessing opportunities. Such conduct has the effect of impairing the enjoyment of rights on an equal footing.
Fourth, Article 26 of the ICCPR recognises the connection between institutionally embedded injustices and the adverse treatment of employees on the basis of their opinions. It necessitates redressing the root causes of discrimination. General Comment 18 states that “the principle of equality sometimes requires states parties to take affirmative action in order to diminish or eliminate conditions which cause or help to perpetuate discrimination prohibited by the Covenant.” (par. 10) The reference to the need to eradicate conditions which cause discrimination supports interpreting Article 26 as obligating states to address the root causes of discrimination. States should therefore adopt a variety of legislative measures to protect their citizens from discrimination in the digital context. States should legislate to require organisations to hire individuals from groups which have historically experienced discrimination. States should hold organisations accountable for failing to treat all employees on an equal basis and for failing to protect individuals who advocate for the interests of underrepresented groups. Furthermore, states should legislate to require organisations to ensure that organisational processes and the work environment are inclusive. States should also require organisations to allocate resources which enable their employees to identify deficiencies in new technologies and to detect adverse impacts on individuals who enjoy protection under the prohibition of discrimination. A counterargument would be that Article 26 does not include a requirement for organisations to reform their structures and practices. However, the purpose of the ICCPR to recognise “the inherent dignity” and “equal rights of all” is better advanced by interpreting the term affirmative action broadly to include the adoption of positive measures to redress the root causes of discrimination.

Tetyana Krupiy
Tetyana (Tanya) Krupiy is a postdoctoral fellow at Tilburg University in the Netherlands. Prior to this, she received funding from the Social Sciences and Humanities Research Council of Canada in order to undertake a postdoctoral fellowship at McGill University. Tanya has expertise in international human rights law, international humanitarian law and international criminal law. Tanya is particularly interested in examining complex legal problems which arise in the context of technological innovation. Tanya’s work appears in publications, such as the Melbourne Journal of International Law, the Georgetown Journal of International Law, and the European Journal of Legal Studies.