Sponsored by China Society for Human Rights Studies

A human rights-based approach to science and technology from a European perspective: Technology is a useful servant but a dangerous master

2022-05-18 09:34:37 | Source: CSHRS | Author: Christian Lous Lange
Technology is omnipresent in our world. Nearly every aspect of our lives is somehow intertwined with technology 107 . Many new ways to communicate, shop or access information online have appeared 108 . They have made our lives easier and spurred innovation. For many of us, going online to read or watch the news or listen to music is among the first things we do in the morning. Some of us have wearable devices monitoring our daily activity and sleep 109 . With the COVID-19 pandemic, technology has become even more indispensable.
 
Artificial Intelligence (AI) plays an important part in this context. According to the Oxford Dictionary, AI is the theory and development of computer systems capable of performing tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making and language translation.
 
As the European Commission has pointed out in its communication “Fostering a European Approach to Artificial Intelligence” 110 , the potential benefits of AI for our societies are manifold: less pollution; fewer traffic deaths; improved medical care and enhanced opportunities for persons with disabilities and older persons; better education and more ways to engage citizens in democratic processes; a more effective fight against terrorism and crime, online and offline, as well as enhancing cybersecurity 111 . Furthermore, AI has demonstrated its potential by contributing to the fight against COVID-19, helping to predict the geographical spread of the disease, diagnose the infection through computed tomography scans and develop the first vaccines and drugs against the virus with an impressive speed 112 .
 
Digital technology is also a useful tool to express our rights and freedoms. It is used for organising meetings, protests and campaigns, thus enabling people to express themselves 113 . But it can also be used to minimise or violate our fundamental rights and freedoms in the form of censorship, surveillance or internet shutdowns 114. Manipulated algorithms can be used to spread disinformation. 

It is therefore crucial that AI systems are designed in ways that comply with human rights standards 115 , as human rights protection is only effective if the changes occurring in our society are reflected in the mechanisms that protect human rights. If the gap between scientific and technological advancement and human rights protection grows too wide, our human rights are endangered. In other words: “the same rights that people have offline must also be protected online, in particular freedom of expression, which is applicable regardless of frontiers and through any media of one’s choice” 116 .
 
But what exactly is a “human rights-based approach to science and technology”? Such an approach requires scientists to secure and affirm human rights through the knowledge and technology they produce 117 . Furthermore, a human rights-based approach strongly affirms that access to scientific information is a human right, as stipulated in Article 27 of the Universal Declaration of Human Rights.
 
This contribution focuses on the European reply to the challenges and opportunities that come with scientific advancement.
 
At the level of the European Union, several binding legal instruments already exist, regulating the use of AI: the treaties of the European Union, the Charter of Fundamental Rights as well as secondary EU law, such as the regulation on the free flow of non-personal data, several anti-discrimination directives and, in particular, the General Data Protection Regulation. The latter plays an important role in the context of machine-learning algorithms. In fact, the increased use of autonomous decision making by machine-learning algorithms is one of the biggest challenges we are facing today 118 . From a legal perspective, the handling of these machine-learning algorithms, which are able to learn independently and act autonomously, is still unclear 119 . The creation of legislation on algorithms and their use is made more complex by the rapid pace of technological development – entailing the risk of creating regulations that are outdated before even entering into force – and the fact that there is no uniform definition of machine-learning algorithms, as the exact meaning of the term “algorithm” depends on the context in which it is used 120 .
 
One of the main problems with machine-learning algorithms is that it is often not possible to determine why an AI system has arrived at a specific result. Therefore, it may become difficult to assess and prove whether someone has been unfairly disadvantaged by the use of AI systems, for example in a recruitment or promotion decision or an application for a public benefit scheme 121 . The use of AI systems may leave affected people with significant difficulties in correcting erroneous decisions.
 
Facial recognition in public spaces can have a very intrusive effect on privacy unless properly regulated. In addition, poor training and design of AI systems can result in significant errors that may undermine privacy and non-discrimination 122 . It is therefore essential that AI-enabled robots and intelligent systems are engineered and designed to meet the same high standards of safety and protection of fundamental rights that European law provides for traditional technologies 123 .
 
If such machine-learning algorithms are used by the state to make decisions which interfere with the rights of individuals, specific difficulties arise, as the rule of law obliges the state to give reasons for a decision 124 . However, an algorithm with a largely incomprehensible process of decision-making cannot adequately satisfy this obligation 125 .
 
Despite these gaps and difficulties, algorithms are already in use today, in both the state and private sectors, largely implemented in e-government, e-commerce, e-learning, e-business and many other fields 126 . A machine-learning algorithm used by the Austrian Public Employment Service (AMS) can serve as an example of the potential danger of such a system 127 . The algorithm has been used since the beginning of 2019 to divide job seekers into three categories based on general criteria, without taking into account the individual circumstances of the job seeker 128 . This was criticised as a potential threat to certain groups, as it increases the risk of discrimination against women, migrants and people with disabilities, because those groups are generally rated more negatively in the evaluation of their job opportunities 129 . Another example of the potentially harmful use of such algorithms is “predictive policing”, where algorithms are used to define zones in which certain risks, such as an increased risk of burglary, are more likely to occur 130 .
 
The General Data Protection Regulation (GDPR) is a binding legal instrument in the EU to regulate the use of AI. Article 22 GDPR is of particular importance in this context, as it covers automated individual decision-making, stipulating in its first paragraph that “the data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her”. Therefore, any decision that is made in a completely automated manner and without human intervention is, in principle, prohibited 131 . Article 22 paragraph 2 GDPR provides for three exceptions to this principle: (i) if it is necessary for entering into, or performance of, a contract between the data subject and a data controller; (ii) if it is authorised by Union or Member State law to which the controller is subject and which also lays down suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests; or (iii) if it is based on the data subject’s explicit consent. In any case, the data subject must never be subjected to a final automated individual decision against his or her will 132 . Articles 13 and 14 GDPR contain specific information obligations for the controller. However, they do not include the disclosure of the algorithm used or the training data of a machine-learning process, hence the benefit for the data subject might be limited 133 . In conclusion, the GDPR is an important regulation of algorithmic decision-making, as it ensures the possibility of human review, but some gaps remain.
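The default prohibition and its three exceptions can be summarised as a short rule check. The following is a minimal illustrative sketch, not legal advice; the function and parameter names are hypothetical:

```python
def automated_decision_permitted(
    necessary_for_contract: bool,
    authorised_by_law_with_safeguards: bool,
    explicit_consent: bool,
) -> bool:
    """Sketch of the Article 22 GDPR rule: a decision based solely on
    automated processing that produces legal or similarly significant
    effects is prohibited unless one of three exceptions applies."""
    return (
        necessary_for_contract
        or authorised_by_law_with_safeguards
        or explicit_consent
    )

# No exception applies: the fully automated decision is prohibited.
print(automated_decision_permitted(False, False, False))  # False
# The data subject's explicit consent is one of the three exceptions.
print(automated_decision_permitted(False, False, True))   # True
```

Even where an exception applies, suitable safeguards such as the possibility of human review remain required, as the text notes.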
 
Another essential element of the regulation of AI at the European level is the wide-ranging reform of the digital space proposed by the European Commission in 2020 in the form of a set of rules for all digital services, including social media, online marketplaces and other online platforms operating in the EU: the Digital Services Act and the Digital Markets Act. The aim of these two regulations, which will be directly applicable in all EU Member States, is to better protect consumers and their human rights online 134 .
 
The new obligations introduced by the Digital Services Act follow the principle “everything that is prohibited offline is also prohibited online”. The Digital Services Act includes, inter alia, rules for the removal of illegal goods, services or content online; safeguards for users whose content has been erroneously deleted by platforms; obligations for very large platforms to take risk-based action to prevent abuse of their systems; transparency measures, including on online advertising and on the use of algorithms to recommend content to users 135 . Every platform reaching more than 10% of the EU’s population (45 million users) is considered systemic in nature and is therefore subject not only to specific obligations to control its risks, but also to a new oversight structure, comprised of a board of national Digital Services Coordinators with special powers for the European Commission in supervising very large platforms, including the ability to sanction them directly 136 . All online intermediaries offering their services in the internal market, whether they are established in the EU or outside, will have to comply with the new rules. Micro and small companies will have obligations proportionate to their ability and size while ensuring they remain accountable.
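The “systemic” designation described above rests on a simple numerical threshold. A hypothetical sketch (the names are invented for the example):

```python
# The DSA treats a platform as "very large" (systemic) once it reaches
# 45 million users, i.e. roughly 10% of the EU's population.
VLOP_THRESHOLD = 45_000_000

def is_very_large_platform(monthly_users: int) -> bool:
    """Platforms above the threshold are subject to the specific
    risk-control obligations and the new oversight structure."""
    return monthly_users > VLOP_THRESHOLD

print(is_very_large_platform(50_000_000))  # True
print(is_very_large_platform(1_000_000))   # False
```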
 
The Digital Markets Act addresses the negative consequences arising from certain behaviours by platforms acting as digital “gatekeepers” to the internal market 137 . Digital gatekeepers are platforms that have a significant impact on the internal market and the power to act as private rule-makers, which can result in unfair conditions for businesses using these platforms and less choice for consumers 138 . A modern legal framework is needed to ensure the safety of users online, provide effective protection of human rights online and maintain a fair and open online platform environment 139 . The Digital Markets Act sets out harmonised rules defining and prohibiting unfair practices by gatekeepers, such as situations where users are locked into a particular service with limited options for switching to another one, and provides an enforcement mechanism based on market investigations. The same mechanism is responsible for keeping the regulation up to date in a constantly evolving digital world. It should be noted that the Digital Markets Act only applies to major providers of core platform services, such as search engines, social networks or online intermediation services, which meet the objective legislative criteria to be defined as gatekeepers. According to the regulation, sanctions 140 will be imposed for non-compliance to ensure the effectiveness of the rules.
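The financial caps attached to these sanctions (fines of up to 10% of total worldwide annual turnover and periodic penalty payments of up to 5% of average daily turnover, see note 140) amount to simple percentage calculations. A hypothetical illustration with invented figures:

```python
def dma_sanction_caps(worldwide_annual_turnover: float,
                      average_daily_turnover: float) -> dict:
    """Upper bounds on DMA sanctions for a non-compliant gatekeeper:
    fines of up to 10% of total worldwide annual turnover and periodic
    penalty payments of up to 5% of average daily turnover."""
    return {
        "max_fine": worldwide_annual_turnover * 0.10,
        "max_daily_penalty": average_daily_turnover * 0.05,
    }

# Hypothetical gatekeeper: EUR 80 bn annual, EUR 220 m daily turnover.
caps = dma_sanction_caps(80e9, 220e6)
print(f"Fine cap: EUR {caps['max_fine']:,.0f}")
print(f"Daily penalty cap: EUR {caps['max_daily_penalty']:,.0f}")
```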
 
On 25 March 2022, a political agreement was reached on the Digital Markets Act, and on 23 April 2022 on the Digital Services Act 141 .
 
The transparency rules of these two regulations have been particularly highlighted by Human Rights Watch 142 as they oblige companies to explain to users how they moderate content, to disclose whether and how automated tools are used and to provide access to data for researchers, including from NGOs.
 
In addition to the Digital Services Act and the Digital Markets Act, the European Commission has set out a European Strategy for AI, which was launched in April 2018. In April 2021, the proposal for a “Regulation laying down harmonised rules on artificial intelligence” was published, but it has not yet been adopted 143 . It focuses on high-risk AI use cases, such as systems used to recruit people or evaluate their creditworthiness or for judicial decision making 144 .
 
Finally, the European Union Agency for Fundamental Rights (FRA) has put a particular emphasis on the protection of human rights in the context of AI by raising awareness, establishing guidelines on how to protect human rights online and publishing reports on potential or actual human rights violations in the digital world. One important issue raised by the Fundamental Rights Agency is unlawful profiling 145 . Thanks to technical developments, profiling is used in an increasingly wide range of contexts, especially in law enforcement and border control. It consists of categorising individuals according to their characteristics 146 . The FRA has established guidelines to prevent unlawful profiling. Three major points must be respected: (i) to collect and process personal data, law enforcement and border management authorities must ensure that data collection and processing have a legal basis and a valid, legitimate aim, and are necessary and proportionate; (ii) protected characteristics such as race, ethnic origin, gender or religion can be among the factors that law enforcement authorities and border guards take into account when exercising their powers, but they cannot be the sole or main reason to single out an individual: profiling based solely or mainly on one or more protected characteristics amounts to direct discrimination, violates the individual’s rights and freedoms, and is therefore unlawful; (iii) in developing and using algorithmic profiling, bias may be introduced at each step of the process; to avoid this and subsequent potential violations of fundamental rights, both the IT experts and the officers interpreting the data should have a clear understanding of fundamental rights. Algorithmic profiling must be legitimate, necessary and proportionate. Individuals have a right to be informed, by receiving information on the personal data that are collected and stored, on the processing and its purpose, and on their rights 147 .
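Guideline (ii) — protected characteristics may be one factor but never the sole or main reason — can be sketched as a weight check over the factors feeding a profiling decision. This is a hypothetical illustration only; the function, weights and factor names are invented for the example:

```python
def is_direct_discrimination(factor_weights: dict, protected: set) -> bool:
    """Rule of thumb for FRA guideline (ii): profiling that relies
    solely or mainly (here: more than half of the total weight) on
    protected characteristics amounts to direct discrimination."""
    protected_weight = sum(w for f, w in factor_weights.items() if f in protected)
    total_weight = sum(factor_weights.values())
    return protected_weight > total_weight / 2

PROTECTED = {"race", "ethnic_origin", "gender", "religion"}

# Ethnic origin dominates the decision: unlawful direct discrimination.
print(is_direct_discrimination(
    {"ethnic_origin": 0.7, "travel_pattern": 0.3}, PROTECTED))  # True

# A protected characteristic is only a minor factor among others.
print(is_direct_discrimination(
    {"gender": 0.1, "ticket_type": 0.5, "route": 0.4}, PROTECTED))  # False
```

The "more than half" cut-off is an assumption made for the sketch; the legal test of what counts as the "main" reason is a matter of case-by-case assessment.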
 
This overview of the instruments in place at EU level shows us that the EU has undertaken important steps to regulate the use of AI and to ensure an effective human rights protection online.
 
As for the Council of Europe, the legal framework is not quite as developed, but some measures have already been taken and others are in the course of development. The first and most important legally binding instrument in this context is the European Convention on Human Rights (ECHR). As the Convention is a living instrument 148 , the European Court of Human Rights interprets it in light of present-day conditions. Early on, in 1979, the Court affirmed in the Sunday Times case 149 that article 10 paragraph 1 of the Convention, which stipulates that everyone has the right to freedom of expression, includes the public’s right to receive information and the duty of the mass media to contribute to it. In this case concerning the harmful side effects of pharmaceutical products, the Court held that the right to freedom of expression guarantees not only the freedom of the press to inform the public but also the right of the public to be properly informed 150 .
 
In 1997, when scientists succeeded in cloning a sheep for the first time, the Council of Europe proposed a framework for biomedicine, the Oviedo Convention. Even today, it remains the only binding legal instrument for the protection of human rights in the biomedical field which prohibits human cloning 151 .
 
As for AI, and in particular decision-making algorithms, the Council of Europe has defined the issue of AI regulation as one of the major challenges of our time. Working with the tech industry, AI designers, NGOs, civil society and various other stakeholders, the Council of Europe aims to find a fair balance between the benefits of technological progress and the protection of human rights by 2028 152 . To achieve this goal, the Council of Europe set up an inter-governmental Ad Hoc Committee on Artificial Intelligence (CAHAI), tasked to identify potential regulatory frameworks for AI in Europe and prepare a feasibility study on these frameworks by the end of 2021. The CAHAI fulfilled its mandate (2019-2021) and was succeeded by the Committee on Artificial Intelligence (CAI) in 2022 153 . The CAI is tasked to examine ways forward in elaborating an appropriate legal instrument on the development, design and application of artificial intelligence, based on the Council of Europe’s standards on human rights, democracy and the rule of law, and conducive to innovation 154 . The CAI is instructed to complete such a legal instrument by 15 November 2023.
 
At the end of December 2018, the European Commission for the Efficiency of Justice (CEPEJ) of the Council of Europe adopted the European Ethical Charter on the use of Artificial Intelligence in judicial systems and their environment 155 . While no European judicial system has introduced AI on a large scale so far, the fact remains that AI is a growing phenomenon in Europe, intended to improve the efficiency and quality of justice 156 . Some AI applications deserve to be encouraged (think of tools that make it possible to carry out legal research more quickly and with more relevant results); others, however, raise a series of questions because of their impact on the office of the judge, fair-trial guarantees and the rights of individuals subject to trial 157 . One example is the COMPAS software, used in the United States to determine the risk of recidivism among persons in police custody, whose discriminatory effects against African-American populations have been revealed by civil society 158 .
 
The Ethical Charter is an AI governance framework setting out five substantive and methodological principles that should guide the integration of AI tools and services into national judicial systems. It highlights that, whether such tools are designed to assist in the provision of legal advice, help in drafting or in the decision-making process, or advise the user, it is essential that processing is carried out with transparency, impartiality and equity, certified by an external and independent expert assessment 159 . The five principles 160 on the use of AI in judicial systems established by the Ethical Charter are: (1) the principle of respect for fundamental rights, (2) the principle of non-discrimination, (3) the principle of quality and security, (4) the principle of transparency, impartiality and fairness and (5) the principle “under user control”. The CEPEJ is at the disposal of the Member States, judicial institutions and representatives of the legal professions to assist them in the implementation of the principles of the Charter.
 
In conclusion, we can say that many important measures have already been taken, both at the level of the EU and the Council of Europe. The EU in particular has established itself as the leading power in regulating the use of AI and protecting human rights online. Now, we have to carefully monitor the rightful implementation and effectiveness of these instruments and adapt them if necessary. The European framework could also serve as an example for a better protection of human rights worldwide.
 
It is true that the effective protection of human rights online, especially if machine-learning, decision-making algorithms are used, is more complex than the protection of human rights offline. This is due to the incredible speed of scientific and technological development, the complexity of the technology used, and the wide range of stakeholders involved.
 
But in the end, the question we have to ask ourselves as a society is not that different from the one we are asking ourselves offline: How can we find the right balance between the different interests at stake? We must create and uphold checks and balances to minimise, as far as possible, the negative effects of new technologies; to detect, prohibit and sanction human rights violations; and to ensure an equal and non-discriminatory use of technology, so that everyone can enjoy the benefits and opportunities new technologies offer and no one is left behind.
 
 
 
*About the author: Christian Lous Lange, Professor at the South City Higher Vocational College in Vienna, Austria, and former judge of the European Court of Human Rights.
 
107 Karim, R., Newaz, M. S. & Chowdhury, R. M., “Human rights-based approach to science, technology and development : A legal analysis”, Journal of East Asia and International Law, May 2018, p. 164.
 
109 Caprau A., “The need of Regulation : AI”, EALR, 2021, n° 1, p. 10. 
 
110 COM(2021) 205, Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions on Fostering a European approach to Artificial Intelligence, Brussels, 21 April 2021, p. 1.
 
111 Ibid.
 
112 Ibid.
 
114 Ibid.
 
115 Ibid.
 
116 UN Human Rights Council, Resolution on the promotion, protection and enjoyment of human rights on the Internet, adopted in 2012 and reaffirmed several times (fifth resolution with the same title adopted on 13 July 2021).
 
 
118 Egger P., Geringer D., Gindra-Vady G., Gruber C., Paar E., Reiter L., Stöger K., Thalmann S., “Challenges of a Digital Single Market from an Austrian perspective – towards Smart Regulations”, ALJ, 2019, n° 1, p. 38.
 
119 Ibid.
 
120 Ibid.
 
121 COM(2021) 205, Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions on Fostering a European approach to Artificial Intelligence, Brussels, 21 April 2021, p. 4.
 
122 Ibid.
 
123 Ibid.
 
124 Ibid., p. 39.
 
125 Ibid.
 
126 Ibid.
 
127 Ibid., p. 40.
 
128 Ibid.
 
129 Ibid.
 
130 Ibid.
 
131 Ibid, p. 42.
 
132 Ibid., p. 43.
 
133 Ibid.
 
135 Ibid.
 
136 Ibid.
 
137 Ibid.
 
138 For more information on “gatekeepers”.
 
 
140 Possible sanctions are: fines of up to 10% of the company’s total worldwide annual turnover; periodic penalty payments of up to 5% of the average daily turnover; in case of systematic infringements of the DMA obligations by gatekeepers, additional remedies may be imposed on the gatekeepers after a market investigation. If necessary and as a last-resort option, non-financial remedies can be imposed. These can include behavioural and structural remedies, e.g. the divestiture of (parts of) a business. For more information.
 
141 The European Parliament and the Council of the European Union have to formally adopt the two regulations, in conformity with the ordinary legislative procedure of the EU. The two regulations will probably enter into force in mid-2023 or early 2024.
 
 
144 COM(2021) 205, Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions on Fostering a European approach to Artificial Intelligence, Brussels, 21 April 2021, p. 6.
 
 
148 European Court of Human Rights (ECtHR), Tyrer v. United Kingdom, no 5856/72, 25 April 1978; ECtHR, Marckx v. Belgium, no 6833/74, 13 June 1979; ECtHR, Mamatkulov and Askarov v. Turkey, no 46827/99, 4 February 2005; ECtHR, Demir and Baykara v. Turkey, no 34503/97, 12 November 2008.
 
149 ECtHR, Sunday Times v. United Kingdom, no 6538/74, 26 April 1979.
 