- INTRODUCTION
According to the Central Bank of Brazil’s definition, Fintechs are companies that introduce innovations into the financial markets through the intense use of technology, with the potential to create new business models.
With innovation and technology as their pillars, fintechs have long made artificial intelligence part of their daily operations.
Fintechs use artificial intelligence as a basic resource to develop new products and optimize existing services, such as automated credit-granting analysis, spending recommendations, receivables advancement, and fraud detection.
Therefore, entrepreneurs must be aware of the bills being proposed to regulate the use of artificial intelligence in Brazil and worldwide and understand how this will impact each sector.
- BILL No. 2.338/23 – OVERVIEW
In May 2023, the latest bill to regulate artificial intelligence (“AI”) systems in Brazil, Bill No. 2.338/2023[1] (“Bill 2.338/23”), was presented for analysis by the Senate.
Inspired by European legislation, Bill 2.338/23 establishes rights for people affected by artificial intelligence technology, determines that artificial intelligence systems made available in Brazil should be self-rated by the developer according to their risk, and defines parameters for supervising and inspecting the activity.
Bill 2.338/23 establishes an excessive-risk classification[2], under which AI systems are prohibited, and a high-risk classification, under which systems may be made available only if certain criteria are met.
Bill 2.338/23 lists as high-risk tools those used for the following activities[3]:
- assessing the debt capacity of natural persons or establishing their credit rating;
- biometric identification systems;
- management of critical infrastructure, such as traffic control and water and electricity supply networks;
- assessment of students and workers;
- decision-making on access to employment, education or essential public and private services;
- administration of justice;
- implementation of autonomous vehicles;
- medical diagnoses and procedures;
- investigation, study, and individual assessment of the risk of committing crimes and of personality traits and criminal behavior; and
- migration management and border control.
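To illustrate the self-classification duty in practice, a fintech might keep an internal inventory mapping each AI use case to the bill’s risk tiers. The sketch below is purely illustrative: the use-case names and tier labels are ours, not the bill’s, and the classifications shown are only a rough reading of the list above.

```python
# Hypothetical internal inventory mapping AI use cases to the risk
# tiers of Bill 2.338/23. Use-case names and tier labels are
# illustrative assumptions, not terms defined in the bill.
RISK_TIERS = {
    "credit_scoring": "high",              # debt capacity / credit rating
    "biometric_id": "high",                # biometric identification systems
    "fraud_detection": "unclassified",     # not expressly listed as high risk
    "social_scoring_public": "excessive",  # prohibited for public authorities
}

def may_deploy(use_case: str) -> bool:
    """Excessive-risk systems are prohibited outright; high-risk systems
    may be deployed only if the bill's compliance requirements are met."""
    tier = RISK_TIERS.get(use_case, "unclassified")
    return tier != "excessive"

print(may_deploy("credit_scoring"))         # True (subject to compliance duties)
print(may_deploy("social_scoring_public"))  # False (prohibited)
```

An inventory like this would feed the documentation and governance duties discussed in the next section, since each high-risk entry triggers additional compliance obligations.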
- IMPACT OF BILL 2.338/23 ON FINTECHS IN BRAZIL
Compliance Requirements
Considering that Bill 2.338/23 classifies as high risk the use of AI to assess debt capacity, establish credit ratings for individuals, and perform biometric identification, the proposed regulation will directly impact fintechs.
According to Bill 2.338/23, to be used, high-risk tools must meet specific compliance requirements in addition to general governance measures[4], such as:
- drawing up documentation on how the system works and the decisions involved in its construction, implementation, and use, considering all the relevant stages in the system’s life cycle;
- use of tools to assess the system’s accuracy and robustness and to identify potential discrimination;
- carrying out tests to assess appropriate levels of reliability, including robustness, accuracy, precision, and coverage tests;
- data management measures to mitigate and prevent discriminatory biases;
- adopting technical measures that make it possible to explain the results of artificial intelligence systems, and providing operators and potentially affected parties with general information on how the model works, explaining the logic and relevant criteria for producing and interpreting results, while respecting industrial and commercial confidentiality.
Bill 2.338/23 also determines that, where the decisions of AI systems have a potentially irreversible impact or could pose risks to the life or physical integrity of individuals, a high degree of human involvement in the tool’s decision-making process will be mandatory.
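As a rough illustration of this human-oversight requirement, an operator’s decision pipeline could route potentially irreversible outcomes to a human reviewer before any effect is produced. The function and parameter names below are hypothetical, a minimal sketch rather than anything prescribed by the bill.

```python
from typing import Optional

def decide(ai_recommendation: str,
           irreversible: bool,
           human_review: Optional[str] = None) -> str:
    """Return the final decision, requiring human sign-off when the
    outcome is potentially irreversible (sketch of the bill's
    human-involvement duty; names are hypothetical)."""
    if irreversible:
        if human_review is None:
            # Block the automated outcome until a human has weighed in.
            raise RuntimeError("human review required for irreversible decisions")
        return human_review  # the human determination prevails
    return ai_recommendation

# A reversible decision may rest on the AI output alone.
print(decide("approve_credit", irreversible=False))  # approve_credit
```

In a real system the “human review” step would of course be a workflow, not a parameter, but the gate itself captures the structure the bill appears to demand.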
Rights of people affected by AI
Fintechs that use AI in their operations would also need to adapt to and comply with the rights that Bill 2.338/23 grants to people affected by artificial intelligence systems.
According to Bill 2.338/23, any person affected by an AI system has, at a minimum, the following rights:
- prior information on their interactions with artificial intelligence systems;
- explanation of the decision, recommendation, or prediction made by artificial intelligence systems;
- the right to challenge decisions or predictions made by artificial intelligence systems that produce legal effects or have a significant impact on the interests of those affected;
- the right to human determination and participation in decisions made by artificial intelligence systems;
- the right to privacy and data protection.
The bill also highlights the right to non-discrimination and to the correction of bias, explicitly prohibiting discrimination based on geographical origin, race, color or ethnicity, gender, sexual orientation, socioeconomic class, age, disability, religion, or political opinions. Adopting criteria to differentiate individuals or groups remains permitted, however, when there is a reasonable and legitimate justification in light of the right to equality and other fundamental rights.
In the event of serious security incidents, such as situations that threaten people’s life or physical integrity, interrupt the operation or supply of essential services, damage the environment, or violate fundamental rights, the competent authorities must be notified.
According to the current text, suppliers and operators of AI tools may adopt governance programs in line with the legislation. Although not mandatory, such programs could demonstrate good faith and, consequently, be taken into account in the application of administrative sanctions, for example.
Civil Liability
The system’s degree of risk also determines civil liability in the case of property, moral, individual, or collective damage.
The supplier or operator of the AI system that causes property, moral, individual, or collective damage is obliged to make full reparation, regardless of the degree of autonomy of the system.
In the case of a high-risk or excessive-risk artificial intelligence system (such as AI systems that assess the debt capacity or establish the credit rating of natural persons, or biometric identification systems), the supplier or operator is strictly liable for the damage caused, to the extent of their participation in it.
For AI systems that are not high risk, the agent’s fault in causing the damage will be presumed, and the burden of proof will be reversed in favor of the victim.
Administrative sanctions
In addition to defining liability rules, Bill 2.338/23 provides for administrative sanctions that can be applied for violations of the rules set out in the law and include:
- warning;
- a fine of up to 50 million reais per infraction or, in the case of a legal entity governed by private law, of up to 2% of the revenue of the entity’s group or conglomerate in Brazil;
- publicizing the infringement;
- prohibition or restriction on participating in the regulatory sandbox regime provided for in the Bill for up to five years;
- suspension – partial or total, temporary or definitive – of the activities of the non-compliant company;
- a ban on processing certain databases.
***
It is important to highlight that in June the European Parliament approved its draft regulation of Artificial Intelligence (AI) in the European Union (EU), on which Bill 2.338/23 is based. However, the regulation, approved with 499 votes in favor, 28 against, and 93 abstentions, still needs to be ratified by the 27 member states before becoming law. According to members of the EU itself, the regulation could take a few years to come into force.
Meanwhile, in Brazil, Bill 2.338/23 is still under analysis in the Senate, without an urgency regime. There is no forecast for a vote, and the country will likely wait for the European regulation to be implemented before approving a law of this kind, just as happened with the GDPR (General Data Protection Regulation) and Brazil’s Lei Geral de Proteção de Dados (LGPD).
[1] https://www25.senado.leg.br/web/atividade/materias/-/materia/157233
[2] According to Bill 2.338/23, “the implementation and use of artificial intelligence systems are prohibited: I – that employ subliminal techniques that have the purpose or effect of inducing a natural person to behave in a way that is harmful or dangerous to their health or safety or against this Law; II – that exploit any vulnerabilities of specific groups of natural persons, such as those associated with their age or physical or mental disability, to induce them to behave in a way that is harmful to their health or safety or against this Law; III – by the public authorities, to assess, classify or rank natural persons, based on their social behavior or personality attributes, using universal scoring, for access to goods and services and public policies, illegitimately or disproportionately.”
[3] Article 17 of Bill 2.338/23.
[4] Article 19 of Bill 2.338/23.