The processing of personal data through the use of artificial intelligence in automated decision-making

Published by:

Catalina Frigerio

The automation of processes and the massive interconnection of digital systems and devices have enabled the large-scale production and capture of data, which are processed through algorithms. Companies in the most diverse fields are implementing predictive systems that simplify decision-making and help design more effective business strategies. This is what commonly happens with the data processed when using search engines and social networks, such as Google or Instagram, which display personalized ads with which each user is likely to interact. The use of artificial intelligence facilitates this kind of innovation.

In broad terms, artificial intelligence (AI) refers to technology composed of computer systems that seek to imitate human qualities, such as reasoning or learning. These attributes are achieved by creating a mathematical model that reflects the instructions used to solve a problem. The instruction set is known as an “algorithm” and is based on the existence of certain variables that involve data. Then, to be executed, the algorithm must be translated into a particular programming language that allows the computer to understand and interpret the instructions in machine language.
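The idea of an algorithm as a fixed set of instructions over certain variables can be illustrated with a minimal sketch. The rule, the variables, and the threshold below are invented for illustration only; they do not come from any real system:

```python
# A hand-written algorithm: the decision rule is fixed in advance by the
# programmer. The 0.4 threshold is a purely illustrative assumption.
def approve_credit(income: float, debt: float) -> bool:
    """Return True when the (made-up) debt-to-income rule is satisfied."""
    debt_to_income = debt / income
    return debt_to_income < 0.4

print(approve_credit(income=50_000, debt=10_000))  # 0.2 < 0.4, so True
```

Here the criterion never changes unless a programmer rewrites it, which is precisely what distinguishes a conventional algorithm from the machine learning systems discussed next.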

A subtype of AI is machine learning, a type of technology in which the computer system is capable of learning from its own experience in order to solve complex problems. Machine learning involves an algorithm instructed to create new mathematical models that allow predictions or decisions to be made based on sample data (known as “training data”). In other words, when we talk about machine learning, it is the algorithm itself that creates new algorithms by identifying patterns or similarities in the training database. Thus, when new data is processed, the system can recognize new patterns based on the algorithm it created itself, which is what gives it its learning quality. As a result, systems that incorporate this kind of technology can perceive and relate to their environment, solving problems and generating responses oriented to a specific purpose.
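The contrast with a hand-written rule can be sketched with a deliberately tiny machine learning example: a nearest-neighbour classifier, where the decision criterion is not coded by the programmer but derived from the training data. All points and labels below are invented for illustration:

```python
# Machine learning sketch: the "rule" is implicit in the training data.
# New inputs are classified by the label of the closest known example.
def predict(training_data, new_point):
    """Classify new_point by the label of its nearest training example."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(training_data, key=lambda example: distance(example[0], new_point))
    return nearest[1]

# Illustrative training data: (features, label) pairs.
training_data = [
    ((1.0, 1.0), "low_risk"),
    ((1.2, 0.8), "low_risk"),
    ((5.0, 4.5), "high_risk"),
    ((4.8, 5.2), "high_risk"),
]

print(predict(training_data, (1.1, 0.9)))  # near the low_risk examples
print(predict(training_data, (5.1, 4.9)))  # near the high_risk examples
```

If the training database changes, the classifications change with it, with no reprogramming: this is the sense in which the factors and criteria behind the decision are not fixed.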

AI is widely used to process personal data, that is, any information concerning identified or identifiable natural persons. On the one hand, personal data can be part of the training data used to create new algorithmic models by identifying patterns. On the other hand, these mathematical models can then be applied to personal data to make inferences or predictions about people and even influence their behavior. In this way, AI allows automated decision-making based on factors and criteria that are not fixed but change according to the database that feeds the algorithm. This type of decision can not only bring efficiencies and reduce transaction costs, but is also offered as a more precise and theoretically impartial option for specific processes, since in principle it avoids the psychological biases typical of human decisions. Examples include its use to filter job applicants or to flag content as violent or false on social networks.
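The applicant-filtering example can be sketched as a fully automated decision applied to personal data. The features, weights, and cutoff below are illustrative assumptions; in a real system they would typically be learned from training data rather than written by hand:

```python
# Fully automated decision-making: a scoring model is applied to an
# applicant's personal data and the outcome is produced with no human
# in the loop. Weights and threshold are illustrative assumptions.
WEIGHTS = {"years_experience": 0.6, "skills_matched": 0.4}
CUTOFF = 3.0

def screen_applicant(applicant: dict) -> str:
    """Return 'advance' or 'reject' based on a weighted score."""
    score = sum(WEIGHTS[key] * applicant[key] for key in WEIGHTS)
    return "advance" if score >= CUTOFF else "reject"

applicant = {"years_experience": 4, "skills_matched": 2}
print(screen_applicant(applicant))  # 0.6*4 + 0.4*2 = 3.2, so "advance"
```

It is exactly this kind of human-free outcome, applied at scale, that the legal rules discussed below are designed to address.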

However, while the application of this technology to decision-making can bring multiple benefits, it can also lead to scenarios of discrimination, erroneous decisions, or abuse, reproducing collective human biases – for example, against ethnic minorities – or introducing new ones, even resulting in violations of individuals’ rights.

For this reason, the development and implementation of AI must consider the complex legal and ethical challenges involved. In this regard, many countries, including Chile, have adopted commitments at the international level for an ethical, responsible, and equitable use of personal data. Without going any further, in 2019 Chile signed the OECD Recommendation of the Council on Artificial Intelligence, which provides a set of internationally agreed principles and recommendations for signatory countries to adopt democratic, inclusive, and human dignity-friendly policies in the face of the “AI crisis”. The same applies to the National Artificial Intelligence Policy, which was submitted to a public consultation that ended on January 27, 2021, and whose final draft was completed this past June 7. However, there is currently no regulation in Chile that supervises AI in a specialized way; instead, AI is subject to the general rules on the processing of personal data, namely Law No. 19.628 on the Protection of Private Life, which dates back to 1999, and – at the constitutional level – Article 19 No. 4 of the Fundamental Charter.

Regarding the regulation of personal data, in the European Union the General Data Protection Regulation (“GDPR”) – the legal body that inspired the current Personal Data Bill (the “Bill”), now in its First Constitutional Procedure in Congress – does not explicitly mention AI, but many of its provisions are of particular importance for its deployment.

In particular, concerning automated decisions, pursuant to Article 22.1 of the GDPR, “the data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.” This provision introduces an ex ante prohibition for controllers on decision-making based solely on the automated processing of personal data, without the need for any action on the part of the affected data subject. For this prohibition to apply, four conditions are required:

First, automated data processing must be used to make a decision, that is, to adopt a position with respect to a person.

Second, the decision must be based solely on automated processing – a condition that in principle is not fulfilled when the system is used merely as a decision-support tool, with humans ultimately making the decision based on the suggestion offered by the system.

Third, it should involve profiling, i.e., any form of processing of personal data that evaluates personal aspects relating to a natural person, in particular to analyze or predict aspects concerning performance at work, economic situation, health, personal preferences or interests, reliability or behavior, or the location or movements of the person concerned.

Fourth, the decision must produce legal effects concerning the person or similarly significantly affect him or her, such as the automatic refusal of an online credit application or e-recruiting practices without human intervention (GDPR Recital 71).
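The cumulative character of these four conditions can be restated schematically. This is a sketch only – a real Article 22 analysis cannot be reduced to booleans, and every input name below is an invented label for the corresponding condition in the text:

```python
# Schematic restatement of the four cumulative conditions under which the
# Article 22.1 GDPR prohibition applies. Illustrative only; each flag
# stands for a legal assessment that is anything but a simple boolean.
def article_22_prohibition_applies(
    makes_a_decision: bool,
    based_solely_on_automated_processing: bool,
    involves_profiling: bool,
    legal_or_similarly_significant_effect: bool,
) -> bool:
    return (
        makes_a_decision
        and based_solely_on_automated_processing
        and involves_profiling
        and legal_or_similarly_significant_effect
    )

# A human reviews the system's suggestion before deciding, so the
# "based solely on automated processing" condition is not met:
print(article_22_prohibition_applies(True, False, True, True))  # False
```

The point of the sketch is that failing any one condition – most commonly the “solely automated” one – takes the decision outside the scope of the prohibition.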

Nevertheless, automated decision-making that produces significant effects will be allowed if any of the following circumstances applies, namely that the decision: (i) is necessary for the conclusion or performance of a contract between the data subject and a controller; (ii) is authorized by law, provided the law also lays down appropriate measures to safeguard the rights, freedoms, and legitimate interests of the data subject; or (iii) is based on the explicit consent of the data subject. In cases (i) and (iii), the data controller must take appropriate measures to safeguard the rights, freedoms, and legitimate interests of the data subject, which at a minimum include the right to obtain human intervention on the part of the controller, to express his or her point of view, and to contest the decision.

Cases have already been reported in the EU in which data subjects have exercised their right to challenge decisions made on the basis of automated data processing. For example, in early 2021, the Court of Amsterdam ordered the reinstatement of six Uber workers on the grounds that the termination of their relationship with the company, which had been decided by “algorithmic means”, was unjustified. Given the increasing use of AI in decision-making, this jurisprudence will become increasingly important.

In our country, Article 8 of the Bill provides that, as a general rule, the holder of personal data has the right to object to the controller making decisions concerning him or her based solely on the automated processing of his or her personal data, including profiling.

However, the preceding rule – which seems to follow closely what the European regulation provides – has certain exceptions in which the data holder’s right to object would not apply. These cases are: (i) where the controller’s decision is necessary for the conclusion or performance of a contract between the holder and the controller; (ii) where there is the prior and clear consent of the holder; and (iii) where provided by law.

For exceptions (i) and (ii) mentioned in the previous paragraph, the Bill provides that the data controller must take the necessary measures to ensure the holder’s rights to (a) obtain human intervention on the part of the controller; (b) express his or her point of view; and (c) request a review of the decision.

As can be seen from the history of its legislative process, the aforementioned article followed the GDPR very closely, and in particular its Article 22. However, there are two differences that we believe are relevant to mention:

First, regarding the legislator’s approach to the regulated activity. On the one hand, Article 22 of the GDPR assumes that decision-making based solely on the automated processing of personal data that produces legal effects concerning the holder of the data, or similarly significantly affects him or her, is prohibited without the need for any intervention by the data subject. On the other hand, Article 8 of the Bill assumes that such activity is lawful, but that the holder of the data has a right to object, without it being necessary for the decision to produce a legal effect or affect him or her in any way. It should be noted in this regard that the original wording of the Bill established a requirement similar to that of the aforementioned Article 22 of the GDPR, which was subsequently eliminated in the course of the legislative discussion because it was considered to give a negative connotation to the automated processing of data, and because the notion of “significant affectation” was regarded as a somewhat ambiguous expression.

Second, regarding the rights that the data subject has in cases where his or her right to object to decisions based on the automated processing of data does not apply. On this point, Article 22 of the GDPR establishes a rule that appears to differ from that of the Chilean Bill. The former grants the owner of the data – in cases where the automated processing of his or her data is permissible, that is, not prohibited per se – the right to contest the decision made by the data controller on the basis of that processing.

The draft Chilean regulation, in the final paragraph of Article 8 bis, does precisely the opposite. This rule deprives the data holder of the right to object in cases quite similar to those in which the GDPR allows – by way of exception – decisions based on automated processing, granting him or her only a right to a review of the decision. To that end, the rule states that “the controller shall take the necessary measures to ensure the rights of the holder, in particular the right to obtain human intervention from the controller, to express his/her point of view and to request a review of the decision”.

Consequently, it seems that the wording of our Bill would give, on the one hand, a less intense defense to the owner of the personal data and, on the other, a greater scope of discretion to the data controller with respect to the decisions it adopts based on the automated processing of personal data, a practice in which AI is widely used.

This being the case, the question arises not only as to what this right to review held by the owner of the data entails, but also whether it could mean a decrease in the intensity of the protection of people’s personal data against decisions based on the automated processing of such data, in a world in which AI and algorithms are gaining ever greater prominence and testing the respect for people’s intimacy and privacy.
