ARTIFICIAL INTELLIGENCE, ALGORITHMS, AND DATA PROTECTION

1. Artificial Intelligence (“AI”)

1.1 What is AI?

The concept of Artificial Intelligence is constantly changing. In a broad sense, AI refers to machines that are able to learn, think, and act on their own; when faced with new situations, they can make decisions much as animals and humans do. The technology is built on complex algorithms, neural networks, cognitive processes, and learning models that reproduce originally human skills, relying on logical thinking, environmental analysis, and strategic decision-making.

1.2. The importance of the topic / Positive aspects of AI

Artificial Intelligence has been discussed from different perspectives and at different levels of advancement for several years. As technology advances at a fast pace, it is now possible to envision machines and computers being programmed to develop capabilities similar to biological human intelligence.

According to data provided by Gartner, 75% of companies will shift from piloting AI to operationalizing it in their businesses. The main goals pursued with AI applications are to develop solutions and to act quickly, namely:

  • improved assessment of indicators and decision making
  • increased automation and information processing speeds
  • reduced levels of errors, risk, and operational costs
  • optimized assistance to the public

Advancements brought about by AI span many sectors of the economy, including:

  • Transport and mobility (GPS; smart cars)
  • Industrial automation
  • Customer Service
  • Banks and Financial Systems
  • Medicine and health
  • Education
  • Entertainment (games, eSports, Music Apps, Streaming)
  • E-commerce retail
  • Agriculture

1.3 Applications

There are many kinds of artificial intelligence; nevertheless, in terms of a working organizational structure, an AI agent operates as follows:

AGENT AI 

I – Perceptions / Sensors

The agent perceives its environment through sensors: cameras, infrared detectors, and microphones, among others.

II – Information Processing

Based on its analysis of the environment, the AI agent processes and interprets the information in order to prepare a decision.

III – Acting

After interpreting the input, the agent acts according to its understanding of the environment: processing images, transcribing speech through voice recognition, moving, or carrying out any other operational decision.

IV – Reasoning and Decision Making

After the agent determines how it should act, the learning phase follows, in which the agent refines what to do based on the analysis of the environment, the information processed, and the action taken.
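As an illustration of the perceive, process, act, and learn cycle described above, the sketch below outlines a minimal agent loop in Python. The thermostat scenario, class names, and threshold are hypothetical placeholders chosen only for illustration, not part of any specific library or standard.

```python
# Minimal sketch of an AI agent cycle: perceive -> process -> act -> learn.
# The thermostat scenario and all names are hypothetical, for illustration only.

class ThermostatAgent:
    def __init__(self, target_temp: float = 22.0):
        self.target_temp = target_temp
        self.history = []          # memory used in the learning phase

    def perceive(self, sensor_reading: float) -> float:
        # I - Perception: receive raw data from a temperature sensor.
        return sensor_reading

    def process(self, temperature: float) -> str:
        # II - Information processing: interpret the reading and prepare a decision.
        return "heat" if temperature < self.target_temp else "idle"

    def act(self, decision: str) -> None:
        # III - Acting: carry out the chosen operational decision.
        print(f"actuator command: {decision}")

    def learn(self, temperature: float, decision: str) -> None:
        # IV - Reasoning/learning: store the outcome to refine future decisions.
        self.history.append((temperature, decision))


agent = ThermostatAgent()
for reading in [19.5, 21.0, 23.2]:
    temp = agent.perceive(reading)
    decision = agent.process(temp)
    agent.act(decision)
    agent.learn(temp, decision)
```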

1.4. Strong and Weak Artificial Intelligence

Strong AI:

Strong AI encompasses artificial general intelligence and artificial superintelligence: the application of intelligence that equals, or even surpasses, the capacity of the human brain.

Weak AI:

Weak, or limited, AI is artificial intelligence that is trained for and focused on carrying out specific tasks, with no capacity to think on its own.

2. Algorithms

2.1 Algorithm

The distinction between an algorithm and an AI algorithm has some nuance. The term “algorithm” is defined as a sequence or group of instructions written to solve a problem. This definition presupposes that the instructions selected to solve the problem are created by humans, meaning that the machine depends entirely on human reasoning.
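For illustration only, the fragment below shows a conventional algorithm in this sense: every rule is written by a human, and the machine merely executes the fixed instructions. The loan scenario and the threshold value are hypothetical.

```python
# A conventional algorithm: a fixed sequence of human-written instructions.
# The rules and thresholds are hypothetical and chosen only for illustration.

def approve_loan(income: float, debt: float) -> bool:
    """Return True if a loan should be approved, following hand-written rules."""
    if income <= 0:
        return False
    debt_ratio = debt / income
    return debt_ratio < 0.4   # rule selected by a human, not learned from data


print(approve_loan(income=5000.0, debt=1500.0))  # True
print(approve_loan(income=2000.0, debt=1200.0))  # False
```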

2.2 Artificial Intelligence Algorithms

AI algorithms can be defined as a sequence or group of instructions loaded with data that serves as a reference for executing the desired operation in the best way possible. They also drive the AI system’s learning process.

2.3 Machine Learning

Machine learning is a field of artificial intelligence in which algorithms are used to organize data, recognize patterns, and allow machines to learn and generate insights without human pre-programming. In this respect, machine learning algorithms learn from their data in order to perform tasks autonomously. When exposed to new data, machines adapt and learn on their own so as to offer new, dependable answers.
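In contrast to the hand-written rules shown in section 2.1, a machine learning algorithm derives its decision rule from data. Below is a minimal sketch using scikit-learn; the toy data and features are hypothetical and serve only to show the idea.

```python
# Machine learning: the decision rule is learned from examples, not hand-coded.
# Toy data and features are hypothetical; requires scikit-learn.
from sklearn.tree import DecisionTreeClassifier

# Each row: [income in thousands, debt ratio]; label: 1 = repaid, 0 = defaulted.
X = [[5.0, 0.2], [4.0, 0.5], [7.0, 0.1], [2.0, 0.7], [6.0, 0.3], [1.5, 0.9]]
y = [1, 0, 1, 0, 1, 0]

model = DecisionTreeClassifier(max_depth=2).fit(X, y)

# When exposed to new data, the model produces an answer it was never
# explicitly programmed to give.
print(model.predict([[3.0, 0.4]]))
```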

Examples of services powered by machine learning: recommendations provided by Netflix, YouTube, and Spotify; voice assistants such as Siri and Alexa; and social networking feeds such as those on Instagram and Twitter.

2.4 Deep Learning

Deep learning is a sophisticated type of machine learning based on techniques that give machines an improved capacity to find even the smallest patterns and deliver predictable results. The technique behind this kind of algorithm is the neural network, and such algorithms are used to teach machines to perform activities much as human beings do.
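As a minimal sketch of the neural-network technique mentioned above, the snippet below defines and trains a small multi-layer network. It assumes PyTorch is available; the architecture, data, and number of training steps are arbitrary illustrative choices.

```python
# Deep learning: stacked layers of artificial neurons learn layered patterns.
# Requires PyTorch; the architecture and data below are illustrative only.
import torch
import torch.nn as nn

model = nn.Sequential(          # a small multi-layer ("deep") network
    nn.Linear(4, 16), nn.ReLU(),
    nn.Linear(16, 8), nn.ReLU(),
    nn.Linear(8, 1), nn.Sigmoid(),
)

x = torch.rand(32, 4)           # 32 hypothetical samples with 4 features each
target = torch.randint(0, 2, (32, 1)).float()

optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCELoss()

for _ in range(100):            # training loop: adjust weights to reduce error
    optimizer.zero_grad()
    loss = loss_fn(model(x), target)
    loss.backward()
    optimizer.step()

print(float(loss))              # final training loss on the toy data
```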

2.5 NLP (Natural Language Processing)

Natural Language Processing is a field of artificial intelligence that helps machines understand, interpret, and manipulate human language. NLP helps machines read texts, extract information, and create summaries, among other capabilities.
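A minimal sketch of one common NLP step follows: turning raw text into numerical features a machine can process, here using scikit-learn's TF-IDF vectorizer. The example sentences are made up for illustration.

```python
# NLP: converting human language into a numerical representation.
# Requires scikit-learn; the example sentences are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer

documents = [
    "The controller must inform the data subject.",
    "The data subject may request a review of automated decisions.",
    "Automated decisions must be transparent.",
]

vectorizer = TfidfVectorizer()
features = vectorizer.fit_transform(documents)   # sparse matrix: documents x terms

print(vectorizer.get_feature_names_out())        # vocabulary extracted from the texts
print(features.shape)                            # (3 documents, number of distinct terms)
```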

3. Challenges Faced By Artificial Intelligence Regarding The Protection of Personal Data

3.1. Applicable Principles of the Brazilian Data Protection Law (“LGPD”)

The LGPD regulates automated processing of personal data, which must observe the general rules set forth in the LGPD.

The automated processing of data, such as that performed by artificial intelligence systems, and its entire development process should comply with the principles of the LGPD, with an emphasis on transparency, security, prevention, and non-discrimination.

Transparency:

Ensuring that data subjects have clear, accurate, and easily accessible information on the processing and on the controllers and processors.

Security:

Use of technical and administrative measures to protect personal data from unauthorized access and any accidental or unlawful situation.

Prevention:

Adoption of measures to prevent the occurrence of damages arising from the processing of personal data.

Non-Discrimination:

Processing for unlawful or abusive discriminatory purposes is prohibited.

3.2 Lawfulness of Processing under the LGPD

In addition to the principles mentioned above, the LGPD establishes the principle of purpose, whereby personal data must be processed for specific purposes of which the data subject is informed.

A legal basis is the legal hypothesis that allows the processing of personal data, and it must be defined based on the analysis of: (i) the type of data to be processed; and (ii) the purpose of the processing.

In this sense, in order to create, develop, and implement an artificial intelligence system, the legal basis for processing personal data should rely on the analysis of the type of data and the purpose of processing to achieve compliance.

The legal bases applicable to the processing of personal data differ from those applicable to sensitive personal data. The legal bases include:

  • Consent
  • Compliance with legal or regulatory obligation
  • Public interest 
  • Defense in judicial, administrative or arbitral proceedings
  • Performance of a contract
  • Legitimate interest
  • Vital interests
  • Health protection
  • Credit protection
  • Fraud prevention

3.3 Automated Decision-Making and Review

The LGPD neither defines nor regulates the term “solely automated decision-making”. In line with the understanding of the European Data Protection Board, solely automated decision-making can be taken to mean the ability to make decisions by technological means without human involvement.

Under the LGPD, data subjects have the right to request a review of decisions taken on the basis of automated processing of their personal data that affect their interests, including profiling related to their personal, professional, and consumer profile and aspects of their personality. As provided by the LGPD, the right of review requires the following two elements:

  • Solely automated decision-making
  • The decision affects the interests of the data subject

The purpose of this rule is to create a layer of protection for data subjects against errors caused by artificial intelligence technology or algorithms.
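As an illustration of how the review right can be accommodated in practice, the sketch below flags decisions that are solely automated and affect the data subject so that they can be escalated for review of the criteria and procedure used. The function names, data structure, and workflow are hypothetical and are not a mechanism prescribed by the LGPD.

```python
# Hypothetical sketch: routing solely automated decisions that affect the data
# subject's interests to a review queue. Names and workflow are illustrative only;
# the LGPD does not prescribe this mechanism.
from dataclasses import dataclass

@dataclass
class Decision:
    subject_id: str
    outcome: str          # e.g. "credit denied"
    solely_automated: bool
    affects_interests: bool

review_queue: list[Decision] = []

def handle_review_request(decision: Decision) -> str:
    # Both elements from the LGPD rule: solely automated AND affects interests.
    if decision.solely_automated and decision.affects_interests:
        review_queue.append(decision)
        return "queued for review of the criteria and procedure used"
    return "review right does not apply to this decision"

d = Decision("subject-123", "credit denied", solely_automated=True, affects_interests=True)
print(handle_review_request(d))
```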

3.4 Discriminatory Algorithms

Most algorithms are based on statistical discrimination, differentiating individuals according to group-based characteristics and the probability of those groups acting in a certain way. In this regard, understanding the processes and criteria used to classify individuals is key: the transparency of the process must be analyzed, as well as whether any bias it embodies is fair.

Discriminatory algorithms can be based on unlawful discriminatory characteristics and make decisions that imply a biased classification of individuals based on their data. Such discrimination may take the following forms (a simple disparity check is sketched after this list):

Discrimination by Statistical Error

It results from statistical errors such as problems in the algorithm’s code, collection of incorrect data, or some other error that causes a failure in the procedure and introduces a discriminatory bias.

Discrimination by the use of Sensitive Information

It stems from the use of sensitive information and usually affects historically discriminated groups. E.g., nationality, gender, age, and race, among others.

Discrimination by Generalization

It results from the mistaken classification of individuals into certain groups, due to the generalized analysis of a given piece of data.

Discrimination that Limits the Exercise of Rights

It stems from the connection between the information used by the algorithm and the exercise of a right by the individual, which may be significantly affected. E.g., credit bureaus in Germany that analyzed how frequently individuals accessed their own information and concluded that frequent access indicated a higher non-payment risk.
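As referenced above, one simple way to surface the group-level disparities that these forms of discrimination can produce is to compare outcome rates across groups, a basic demographic parity check. The sketch below is purely illustrative: the data, the group attribute, and the tolerance threshold are hypothetical, and a real assessment requires legal and statistical analysis.

```python
# Hypothetical sketch of a demographic parity check: compare the rate of
# favourable outcomes between two groups. Data and threshold are illustrative.

def approval_rate(outcomes: list[int]) -> float:
    return sum(outcomes) / len(outcomes)

# 1 = approved, 0 = denied, split by a group attribute (e.g. nationality).
group_a = [1, 1, 0, 1, 1, 0, 1, 1]
group_b = [1, 0, 0, 0, 1, 0, 0, 1]

disparity = abs(approval_rate(group_a) - approval_rate(group_b))
print(f"approval rates: {approval_rate(group_a):.2f} vs {approval_rate(group_b):.2f}")

if disparity > 0.2:   # hypothetical tolerance; real thresholds require legal analysis
    print("potential discriminatory bias - investigate the model and its data")
```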

3.5 LGPD Transparency / Algorithm Decision-Making

The LGPD provides for the data controller’s duty to provide, whenever requested, clear and adequate information to data subjects about the criteria and procedures used in automated data processing. This is referred to as a right to explanation, and it is linked to ensuring greater transparency for data subjects. If the data controller does not comply with this duty, the ANPD may carry out an audit to verify discriminatory aspects in the automated processing of data.

However, some issues can make it difficult to ensure transparency for the data subject when it comes to automated decisions, such as:

  • Provisions in trade and industrial secret laws; and
  • Opaqueness of the AI-based decision-making process (“black box issue”).

Artificial intelligence has an “opaque” aspect concerning transparency: algorithms reflect the biases of their creators and, when combined and trained, reproduce systemic problems in society in ways that are difficult to predict, which is why such algorithms are often compared to a “black box”.
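One simple way to reduce this opacity, at least for linear models, is to expose which inputs weigh most in a decision. The sketch below inspects the coefficients of a logistic regression as a rudimentary form of explanation; it assumes scikit-learn is available, the data and feature names are hypothetical, and more complex models would require dedicated explainability techniques.

```python
# Rudimentary "explanation" of an automated decision: inspect which features
# weigh most in a linear model. Requires scikit-learn; toy data is hypothetical.
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "debt_ratio", "years_employed"]
X = [[5.0, 0.2, 10], [2.0, 0.7, 1], [6.0, 0.3, 8],
     [1.5, 0.9, 0], [4.0, 0.4, 5], [2.5, 0.8, 2]]
y = [1, 0, 1, 0, 1, 0]   # 1 = credit granted, 0 = denied

model = LogisticRegression().fit(X, y)

# The coefficients show how each feature pushes the decision, which can support
# the clear and adequate information the controller must provide on request.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name}: {coef:+.3f}")
```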

3.6. Best Practices in Artificial Intelligence

To achieve reliable artificial intelligence, the following elements must be observed throughout the system’s life cycle:

I: Compliance with all applicable legislation and regulations;

II: Ensuring compliance with ethical principles and values; and

III: Stability both from a technical and social point of view, due to the potential risk of damage that artificial intelligence can cause.

Among the points to be observed when using artificial intelligence, the following must be prioritized:

  • Preparation of a Data Protection Impact Assessment (DPIA) for risk analysis and definition of security measures in the use of new technologies;
  • Use of Privacy by Design tools in the development, implementation, and use of artificial intelligence systems;
  • Use of mechanisms that facilitate the traceability and auditability of artificial intelligence systems;
  • Compliance with the Brazilian Data Protection Law;
  • Compliance with consumer protection rules and sectoral laws; and
  • Ensuring the rights of data subjects, including providing transparency regarding the automated processing of data in a privacy policy.
