
Legal Regulation of Artificial Intelligence-related Human Rights

2022-05-19 10:38:57 | Source: CSHRS | Author: Xiao Junyong
I. The Actual Impact and Potential Threat of Large-scale Application of Artificial Intelligence to Human Rights
 
The development and application of artificial intelligence (AI) creates social value, but it also introduces uncertainties into human rights governance. New problems have gradually emerged in the practice of human rights governance, concerning technologically disadvantaged groups, racial discrimination, biometric identification, and privacy protection. Addressing them requires the human rights governance system to adopt new perspectives and new programs, so that technology can return to its fundamental orientation toward good. Advances in algorithms, computing power, and data have driven the third wave of AI, moving it from the laboratory into the actual operation of society and making it part of the infrastructure of the digital society. AI is now widely applied on online platforms, for example in information push (including user portraits, search rankings, and content curation), identification and classification of people (face recognition and qualitative screening), community governance (content monitoring and deletion), voice and image recognition, content synthesis (text, media, and images), and automated decision-making. Driven by gains in commercial efficiency, AI programs have gradually penetrated all fields of social life, replacing traditional programs with advantages such as low cost, high efficiency, and speed. Such shifts in tools and solutions produce new social structures. On the one hand, the rights of individuals are actually affected or potentially threatened. On the other hand, low-skilled groups (those who lack technical equipment or technical ability) and minority groups are neglected in the application of AI, so their rights cannot be exercised normally or are even impaired. These threats involve matters already covered by human rights governance, such as privacy, the protection of minorities, and freedom of expression, and they also extend to new rights that have not yet been covered or that are difficult to generalize, such as the right to be forgotten and the right of those affected by intelligent decision-making to know and to appeal. These dilemmas call for changes in the current scope, model, and system of human rights governance.
 
(I) AI application and its threats against human rights
 
At present, the practice and application of AI technologies around the world have posed potential threats to, or caused substantial damage to, human rights such as freedom of opinion and expression, privacy, equality, and access to remedy, including:
 
1. AI algorithms for monitoring and deleting content. AI algorithms that cannot accurately understand natural language easily misread the true meaning of user content: they often fail to recognize racial discrimination or hate speech, or they mistakenly take punitive measures such as deleting user posts. Their judgments are particularly inaccurate for less widely used languages, and the very name of an ethnic group may be treated as inappropriate. For example, Instagram’s community-moderation AI has identified “Mexican” as a derogatory term (a simplified sketch of how such false positives arise is given after this list).
 
2. Biometrics. AI recognition rates for minority groups are low, while traditional solutions have been supplanted by technological upgrades. As a result, minority groups are not only excluded from the “AI era” but are also, to a certain extent, deprived of the solutions previously available to them, which causes them many inconveniences. In the early stage of the application of health codes during the COVID-19 pandemic, low-skilled people who used non-smartphones, or who were unfamiliar with smartphones, were restricted from entering public places; only later was a function added for filling in and reporting on another person’s behalf, and in practice the ID card became an alternative means of registration.
 
3. Recommendation and screening based on user portraits. First, information pushers use AI and big data technologies to build user portraits and push different information to particular groups according to those portraits; for example, housing and job advertisements can be hidden from the elderly, women, or ethnic minorities, or people with neutral political positions can be fed only messages favoring one political party. Second, user portraits are used for screening that involves race, ethnicity, gender, or age; for example, in recruitment software an employer can filter out candidates of a particular skin color or gender, which facilitates discrimination by users and creates new discriminatory situations (a sketch of such portrait-based screening is also given after this list).
 
4. Decisions made by AI are difficult to appeal against. Such decisions are not easily detected by the affected party, and even when the affected party becomes aware of them, appealing is difficult or no appeal channel exists.
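To illustrate the misclassification described in item 1, the following minimal Python sketch shows a purely hypothetical keyword-based filter; it is not any platform's actual moderation system. Because it matches surface terms without understanding context, it treats a benign statement of identity exactly like hate speech. The blocklist entries are invented placeholders.

```python
# Hypothetical sketch of a naive keyword-based content filter.
# It is not any platform's real moderation system; it only shows how
# matching surface terms without context produces false positives.

BLOCKLIST = {"slur_a", "slur_b", "mexican"}  # invented placeholder entries


def moderate(post: str) -> str:
    """Return 'remove' if any blocklisted term appears, otherwise 'keep'."""
    tokens = (tok.strip(".,!?") for tok in post.lower().split())
    return "remove" if any(tok in BLOCKLIST for tok in tokens) else "keep"


# A harmless statement of identity is punished the same way as hate speech,
# because the filter cannot read intent or context.
print(moderate("Proud to be Mexican."))             # -> remove (false positive)
print(moderate("Weekend photos from Mexico City"))  # -> keep
```

Real moderation systems are far more sophisticated, but the same structural problem, judging meaning from surface features, underlies the errors described above.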
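Likewise, the portrait-based screening described in item 3 can be sketched in a few lines. The attribute names and the targeting rule below are assumptions made only for illustration; they show how filtering an advertisement audience on portrait attributes quietly excludes protected groups, who never learn that the housing or job advertisement existed.

```python
# Hypothetical sketch of audience screening based on user portraits.
# The portrait fields and targeting rule are invented for illustration.
from dataclasses import dataclass


@dataclass
class UserPortrait:
    user_id: int
    age: int
    gender: str


def eligible_audience(users, max_age=40, genders=("male",)):
    """Return only the users the advertiser has chosen to target;
    everyone else simply never sees the housing or job advertisement."""
    return [u for u in users if u.age <= max_age and u.gender in genders]


users = [
    UserPortrait(1, 29, "male"),
    UserPortrait(2, 52, "female"),  # excluded by age and gender
    UserPortrait(3, 34, "female"),  # excluded by gender
]
print([u.user_id for u in eligible_audience(users)])  # -> [1]
```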
 
As tools, AI technologies have greatly improved the efficiency of each actor’s behavior, but the service “upgrade” achieved by replacing traditional tools or solutions has obvious structural shortcomings, and various kinds of misuse, abuse, and even absurd uses of AI still occur in real-life scenarios. The bigger problem is that we are not yet able to determine accurately the degree to which AI harms or promotes human rights, and no suitable tool for making that judgment yet exists.
 
II. Reasons for Possible Human Rights Violations by AI
 
Many human rights issues arise in the current information environment, and their crux can be attributed to the following:
 
1. The opinions of marginalized groups can hardly be heard and carry little influence. Minority groups are rarely fully represented in the datasets used to train AI. Because these groups are small in number, algorithm service providers are often not strongly motivated to adapt their services systematically to their needs.
 
2. The status gap between users and service providers is too large. Users have neither the opportunity nor the ability to negotiate and can only choose to “accept” or “leave”; they cannot bargain over the terms offered, but can only agree to them as a whole or decline to use the service.
 
3. Users are scattered, which makes an accurate assessment of their human rights protection difficult. A scattered user base means that data on the overall human rights situation of AI service users are hard to collect. Such data can be obtained at low cost only from the systems of service providers, who often lack sufficient incentive to evaluate their AI algorithms against human rights protection.
 
4. Supervision mechanisms and norms are lacking. Obstruction of the exercise of human rights and infringement of human rights are not uncommon in the unchecked development of AI applications, yet regulatory bodies and regulatory norms are absent. AI service providers likewise lack awareness of their service and supervision obligations, disregard human rights protection, and seldom provide corresponding rules or correction channels alongside their services.
 
III. Regulatory System to Prevent Human Rights Violations by AI
 
There are two ways to carry out legal regulation for the purpose of AI-related human rights protection. One is administration-oriented supervision: formulating a set of standards, coupled with monitoring and punishment mechanisms, to ensure that human rights are not violated. The other is market-oriented guidance: leveraging market competition to drive out AI providers who violate human rights, by enabling AI service recipients to judge for themselves whether their human rights have been violated or to make choices among various rights.
 
(I) The regulation scheme of administrative supervision
 
The report of the Special Rapporteur of the United Nations High Commissioner for Human Rights [494] presents a typical supervisory scheme. It contains numerous declaratory value clauses, such as urging Member States to regulate related issues, calling on Member States to reduce algorithmic bias in commercial, military and government products and tools, and requiring States to combat all types of digital technology-related discrimination, which are empty and infeasible. It also contains specific monitoring plans: States should formulate human rights standards and evaluation plans for AI systems, collect information and data to determine the state of human rights protection, cooperate with companies to ensure through necessary measures that ethnic minorities are adequately represented, and ensure that victims of human rights violations receive timely and just remedy, among other concrete and feasible measures. Except for ensuring that victims receive timely and just remedy, which falls under judicial justice, the other three are all solutions based on the technical characteristics of AI.
 
1. The establishment of substantive standards for human rights protection in AI algorithms, together with corresponding review and punishment mechanisms, is an implicit response to algorithmic decision-making. Through standardization, the algorithmic decision-making mechanism can be made visible and, to a certain extent, quantified, providing a basis for further regulation of algorithm applications. Punishment norms aligned with the review and supervision mechanism change the behavioral cost of algorithm providers in a mandatory way, pushing them toward technical solutions or commercial practices that better protect human rights. This is how the entire administrative supervision system plays its role in the real world.
 
2. The collection of information and data by administrative means to evaluate human rights protection is not an independent measure; to a certain extent it is one of the specific channels for evaluating human rights protection standards. Algorithm providers lack sufficient motivation, and algorithm users, though motivated, cannot bear the cost of information collection. If administrative bodies do not provide this public service, human rights protection data cannot be obtained, and the other measures in the governance system also lose the legitimate basis of their practice.
 
3. The rights and interests of minority groups should be given full expression through administrative intervention. Granting minority groups a channel, and the legitimacy, to express their demands in a normative form reduces the behavioral cost of expressing those demands, improves the chance that they are adopted, and diminishes the possibility that minority and majority groups are treated unequally in algorithmic decision-making because their demands differ.
 
(II) Analysis of the cost of administrative supervision
 
The benefits of legal norms are an important component of their legitimacy. If the value of the resources society consumes to implement a set of legal norms exceeds the value their implementation brings, such laws will surely be rejected by society. The rational choice for society is to select, among the possible schemes, the normative scheme that consumes the fewest resources and obtains the best effect. This is especially true in the field of AI-related human rights regulation. The implementation cost of norms largely determines the degree to which they can be implemented; an excessive normative cost not only makes the norms difficult to operate but may even destroy the existing order and harm society. The costs of regulating AI-related human rights are mainly as follows:
 
1. Difficulty in identifying and judging human rights violations by administrative means. Regulators must distinguish legitimate classification from discrimination, yet the statistical form in which AI algorithms draw on data sources and express results makes normal classification and discrimination hard to tell apart (one statistical lens for such review is sketched after this list).
 
2. Difficulty in building the supervision system. The subject of value judgment in an administrative supervision system is misplaced: under administrative supervision the government must weigh conflicting values, while the consequences of that weighing are borne by the recipients of the AI algorithm. Choosing among the parties’ rights by administrative means is, at best, inefficient; such a judgment cannot take into account the value of each right to each individual and therefore causes a loss of social value.
 
3. Difficulty of supervision in the implementation of norms. Supervising AI algorithms requires a high degree of professional expertise, a requirement that stems from the implicit nature of AI algorithmic decision-making. The calculation and identification performed by algorithms are digital, and the specific identification standards are expressed in the form of complex mathematics and assembly languages. If non-practitioners are entrusted with supervision, violations by professionals may escape detection, or wrong assessments may be made for lack of expertise; if professionals are allowed to supervise, the classic dilemma of regulation and corruption arises again.
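As a hypothetical illustration of the identification difficulty raised in item 1, the following Python sketch computes per-group selection rates and a disparate-impact ratio, one statistical lens (the "four-fifths" rule used in some anti-discrimination practice) that a reviewer might apply to an algorithm's outcomes. The data are invented, and a low ratio signals a disparity worth investigating rather than proof of discrimination, which is precisely why administrative judgment remains hard.

```python
# Hypothetical sketch: per-group selection rates and a disparate-impact
# ratio computed over invented decision records (group, was_selected).
from collections import defaultdict


def selection_rates(decisions):
    """Map each group to the share of its members who received a positive decision."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, selected in decisions:
        totals[group] += 1
        positives[group] += int(selected)
    return {g: positives[g] / totals[g] for g in totals}


decisions = ([("majority", True)] * 80 + [("majority", False)] * 20
             + [("minority", True)] * 45 + [("minority", False)] * 55)

rates = selection_rates(decisions)
ratio = rates["minority"] / rates["majority"]
print(rates)                                    # {'majority': 0.8, 'minority': 0.45}
print(f"disparate-impact ratio = {ratio:.2f}")  # 0.56, below the 0.8 rule of thumb
```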
 
In an indirect human rights regulation system in which “laws regulate the market order, and the market order forms an appropriate human rights protection system”, the protection of AI-related human rights is almost “invisible”. As a dynamic equilibrium forged by market entities through market behavior, this protection system has no quantifiable indicators. Promoting human rights protection mainly through spontaneous market order, without a supporting regulatory system, is itself unstable and baseless.
 
(III) Guarantee of the right to know, choose and appeal
 
Opacity in AI-related human rights governance is multi-faceted, encompassing the opacity of the technology, of the operator’s value orientation, and of the credibility of governance practices, which together make it opaque whether, and to what extent, the human rights of service recipients are violated. Indirect standards are therefore needed to measure and examine the effect of administrative supervision, to secure and maintain the service recipients’ right to choose, and to ensure the normal operation of administrative supervision and market regulation mechanisms. These indirect standards are the most basic rights of AI service recipients: the right to know, the right to choose, and the right to appeal.
 
The right to know means that recipients of AI services have the right to know the attributes of the AI services they receive, the identity of the service provider, the information to be collected for the service, the provider’s privacy protection policy, the complaint channels, and other substantive matters, as well as procedural matters such as the provider’s obligation to explain its terms to the recipient.
 
The right to choose means that service recipients have the right to choose whether to accept AI services and from whom. The right to choose has different connotations in administrative application scenarios and in market scenarios. In administrative applications, as mentioned above, at least “2+1” options must be provided for recipients, that is, two or more different AI programs plus the traditional solution. Choosing among multiple programs also engages the right to know, so the characteristics of each program and the statistics of its use should be explained to choosers as far as possible.
 
The right to appeal is the right of an AI service recipient who rejects an AI judgment and believes it infringes his or her rights to appeal to the service provider or to administrative agencies. An appeal may include asking the algorithm provider to re-judge before damage has been done and to provide the basis for its judgment, demanding that an infringement be stopped and compensated, and also reporting the situation to the government-designated agency and seeking remedy.
 
Through protection of the right to know, the right to choose, and the right to appeal, market players can choose impartial and non-discriminatory AI service providers according to their perception of the providers’ value orientations, while administrative supervision agencies can take the condition of these three rights as a standard for a basic judgment on the current state of AI-related human rights governance, which in turn serves as a basis for further work on regulatory standards and regulatory schemes.
 
IV. Conclusion
 
Focusing on the human rights governance system, this paper chooses a legal regulation approach to AI-related human rights in the face of realistic and urgent threats. Human rights violations have already occurred in the actual application of algorithmic technologies. To deal with the adverse impact of algorithm use on the human rights governance system, regulating algorithms through the market order is indeed “efficient”, but the market can also fail. It therefore remains necessary to strengthen administrative supervision and to form an AI-related human rights governance system with the dual norms of “market + administration”. However the choice of norms acts on the norm system, the implementation and value of norms ultimately depend on the design, drafting, and implementation of specific legal provisions.
 
 
*About the author: Xiao Junyong, Professor and doctoral supervisor at the Law School of Beijing University of Technology, and executive director of the Center for Science, Technology and Human Rights of Beijing University of Technology, a national human rights education and training base.
 
[494] Promotion and protection of the right to freedom of opinion and expression.