Massachusetts official warns AI systems subject to consumer protection, anti-bias laws

BOSTON — Developers, suppliers, and users of artificial intelligence must comply with existing state consumer protection, anti-discrimination, and data privacy laws, the Massachusetts attorney general cautioned Tuesday.

In an advisory, Attorney General Andrea Campbell pointed to what she described as the widespread increase in the use of AI and algorithmic decision-making systems by businesses, including technology focused on consumers.

The advisory is meant in part to emphasize that existing state consumer protection, anti-discrimination, and data security laws still apply to emerging technologies, including AI systems — despite the complexity of those systems — just as they would in any other context.

“There is no doubt that AI holds tremendous and exciting potential to benefit society and our commonwealth in many ways, including fostering innovation and boosting efficiencies and cost-savings in the marketplace,” Campbell said in a statement.

“Yet, those benefits do not outweigh the real risk of harm that, for example, any bias and lack of transparency within AI systems can cause our residents,” she added.

Falsely advertising the usability of AI systems, supplying an AI system that is defective, and misrepresenting the reliability or safety of an AI system are just some of the actions that could be considered unfair and deceptive under the state’s consumer protection laws, Campbell said.

Misrepresenting audio or video content of a person for the purpose of deceiving another to engage in a business transaction or supply personal information as if to a trusted business partner — as in the case of deepfakes, voice cloning, or chatbots used to engage in fraud — could also violate state law, she added.

The goal, in part, is to help encourage companies to ensure that their AI products and services are free from bias before they enter the commerce stream — rather than face consequences afterward.

Regulators also say that companies should disclose to consumers when they are interacting with algorithms. A lack of transparency could run afoul of consumer protection laws.

Elizabeth Mahoney of the Massachusetts High Technology Council, which advocates for the state’s technology economy, said that because there might be some confusion about how state and federal rules apply to the use of AI, it’s critical to spell out state law clearly.

“We think having ground rules is important and protecting consumers and protecting data is a key component of that,” she said.

Campbell acknowledges in her advisory that AI holds the potential to deliver great benefits to society, even as it has been shown to pose serious risks to consumers, including bias and a lack of transparency.

Developers and suppliers promise that their AI systems and technology are accurate, fair, and effective, even as they also claim that AI is a “black box,” meaning that they do not know exactly how it performs or generates results, she said in her advisory.

The advisory also notes that the state’s anti-discrimination laws prohibit AI developers, suppliers, and users from using technology that discriminates against individuals based on a legally protected characteristic, such as technology that relies on discriminatory inputs or produces discriminatory results that would violate the state’s civil rights laws.

AI developers, suppliers, and users also must take steps to safeguard personal data used by AI systems and comply with the state’s data breach notification requirements, she added.

