Newsletter (#004/2025) on Artificial Intelligence by Campos Thomaz Advogados

Alerts, materials, and updates on Artificial Intelligence.

To subscribe, click here.

Law Enacted to Increase Penalties for Crimes of Violence Against Women Using Artificial Intelligence

Violence against women committed with the aid of artificial intelligence takes on new dimensions with the enactment of Law 15.123/25. The law increases penalties for crimes that use this technology, such as deepfakes that manipulate images and audio, providing greater protection for victims. Learn more.

Virginia’s AI Bill Sets Precedent, but Future Remains Unclear

Virginia’s Legislature became the first in the U.S. this year to pass legislation regulating artificial intelligence systems deemed “high risk.” The High-Risk Artificial Intelligence Developer and Deployer Act establishes requirements for AI tools used in education, employment, finance, and health care, mandating disclosures about risks and performance and adherence to frameworks such as the NIST AI Risk Management Framework or ISO/IEC 42001. Though modeled after Colorado’s 2024 law, Virginia’s bill omits categories like public benefits and lacks requirements for public disclosures. Civil penalties could range from $1,000 to $10,000, with enforcement by the state’s attorney general. Stakeholders are divided: privacy advocates point to loopholes that could exempt major industries, while tech lobbyists warn of stifled innovation. Learn more.

China Mandates AI-Generated Content Labeling Starting September 2025

Beginning September 1, 2025, China will enforce regulations requiring explicit and implicit labeling of AI-generated content. Announced by the Cyberspace Administration of China (CAC), the measures aim to combat misinformation and enhance digital transparency. Explicit labels, in the form of text, audio cues, or graphics, must be perceptible to users; implicit labels, by contrast, must be embedded in the file’s metadata and contain technical information such as the file’s origin and identifiers of the producing organization. Removing, concealing, or falsifying these labels is prohibited, and content distribution platforms must verify whether published material is AI-generated and ensure the appropriate labels are present. Learn more.
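For readers curious about what an implicit label might look like in practice, the sketch below (in Python) shows one way a provider could record a file’s origin and the producing organization’s identifier as machine-readable metadata in a JSON sidecar file. The field names, the sidecar approach, and the producer identifier are illustrative assumptions only, not the schema prescribed by the CAC.

```python
import json
import hashlib
from datetime import datetime, timezone
from pathlib import Path


def write_implicit_label(file_path: str, producer_id: str) -> Path:
    """Attach an assumed implicit AI-content label as a JSON sidecar file.

    The field names below are illustrative; the CAC rules require metadata
    identifying the content's origin and the producing organization, but do
    not mandate this exact structure.
    """
    source = Path(file_path)
    content = source.read_bytes()
    label = {
        "ai_generated": True,                                    # flags the content as synthetic
        "producer_id": producer_id,                               # identifier of the producing organization (assumed field)
        "source_file": source.name,                               # origin of the labeled file
        "content_sha256": hashlib.sha256(content).hexdigest(),    # ties the label to this exact file
        "generated_at": datetime.now(timezone.utc).isoformat(),   # timestamp of label creation
    }
    sidecar = source.with_name(source.name + ".ai-label.json")
    sidecar.write_text(json.dumps(label, indent=2))
    return sidecar


# Example: label an AI-generated image before distribution.
# write_implicit_label("generated_image.png", "example-ai-provider-001")
```

An explicit label would additionally be rendered where users can perceive it, for instance a visible “AI-generated” caption or watermark on the published content.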

AI Regulation Gains Momentum in Brazil: Chamber of Deputies to Review Bill 2338/23

The President of Brazil’s Chamber of Deputies, Hugo Motta (Republicanos-PB), is expected to forward Bill 2338/23 for discussion soon. Approved by the Senate last December, the bill—introduced by former Senate President Rodrigo Pacheco (PSD-MG)—aims to regulate artificial intelligence in Brazil. At the request of the Workers’ Party (PT), Motta is considering forming a working group to analyze and potentially revise the proposal. The bill was debated in a public hearing hosted by the Coalizão Direitos na Rede on March 14. Learn more.

U.S. Court Sides with Anthropic in Copyright Clash with Music Publishers

AI company Anthropic scored a procedural win after a California federal judge denied a motion from major music publishers, including Universal Music Group, to block the use of copyrighted lyrics in training its chatbot, Claude. The judge ruled that the publishers’ request was overly broad and failed to prove “irreparable harm.” The lawsuit, filed in 2023, alleges that Anthropic used lyrics from at least 500 songs by artists such as Beyoncé and the Rolling Stones without permission for AI training purposes. Learn more.

Brazil’s Financial Intelligence Unit Tightens AI Rules and Holds Staff Liable for Data Leaks

Brazil’s Financial Intelligence Unit (Coaf), which operates under the Central Bank, has issued strict regulations governing its personnel’s use of Generative Artificial Intelligence (GAI). The directive, published in the Official Gazette on March 24, 2025, bans processing confidential and sensitive personal data through external GAI tools, including Microsoft Copilot. It also mandates that all AI use involving Coaf’s information assets be monitored, assessed by the IT Coordination Office, and approved by internal governance structures. Learn more.

Ireland Investigates X for Feeding Europeans’ Data to AI Model Grok

Ireland’s Data Protection Commission (DPC) has opened an investigation into how the platform X used Europeans’ personal data from publicly accessible posts to train its AI model Grok. The inquiry follows growing concerns about whether generative AI technologies comply with EU privacy standards. Learn more.

Learn about our AI Ethics as a Service

We have prepared dedicated material explaining how to engage our AI Ethics as a Service offering. Contact our team.

Much more about Artificial Intelligence

Explore our content series on Artificial Intelligence. Access the full series.

Produced by Alan Campos Thomaz and João Marcelo de Oliveira


Subscribe to our newsletter and receive our updates first-hand.

    For more information on how we handle your personal data, see our Privacy Policy.