Abstract: With the development and application of automation technology, many decisions are no longer made by humans but entirely by machines, or by humans with machine assistance. Automated decision-making is upending the traditional human-machine relationship, in which humans are the subject and machines the object, and thereby poses a challenge to human dignity. Automated decision-making often disregards the concept of due process and violates its core principles of neutrality, openness, and participation. The two regulatory approaches of prohibiting automated decision-making and granting individuals the right to avoid it differ in their objectives, methods, and limits, but both neglect, to some extent, the balance of interests between data subjects and data controllers. Introducing the "Balance Theory" can change our understanding of the validity, nature, and normative content of the individual's right to be free from automated decision-making, and provide a methodological basis for realizing that right. The "Balance Theory" requires establishing a hierarchical protection mechanism against automated decision-making, a balanced protection mechanism covering the whole process, and a coordination mechanism between hard law and soft law.
Keywords: Artificial Intelligence; Algorithm; Automated Decision-making; Human Dignity; Due Process; "Balance Theory"
*About the author: Zheng Zhihang, Doctor of Laws, is Deputy Director of the Shandong University Human Rights Center (National Human Rights Education and Training Base), Professor and doctoral supervisor, and Vice President of Shandong University Law School.