States and local governments would be limited in how they can regulate artificial intelligence under a proposal currently before Congress. AI industry leaders say the measure would ensure the United States can lead in innovation, but critics say it could mean fewer consumer protections around the fast-growing technology.
The proposal, as passed by the House of Representatives, says that no state or political subdivision “may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems or automated decision systems” for 10 years. The House added it in May to the full budget bill, which also includes the extension of the 2017 tax cuts and cuts to services like Medicaid and SNAP. The Senate has since made some changes, namely that the moratorium would apply only to states that accept funding under the $42.5 billion Broadband Equity, Access, and Deployment (BEAD) program.
AI proponents and some lawmakers have said federal action is necessary to keep states from creating a patchwork of different rules and regulations across the US that could slow the technology’s growth. The rapid growth of generative AI since OpenAI’s ChatGPT exploded onto the scene in late 2022 has led companies to fit the technology into as many spaces as possible. The economic implications are significant, as the US and China race to see which country’s technology will predominate, but generative AI poses privacy, transparency and other risks for consumers that lawmakers have sought to temper.
“[Congress hasn’t] done any meaningful consumer protection legislation in many years,” Ben Winters, director of AI and privacy at the Consumer Federation of America, told me. “If the federal government doesn’t act, and they say nobody else can act, it only benefits the tech companies.”
Efforts to limit states’ ability to regulate artificial intelligence could mean fewer consumer protections around a technology that is increasingly seeping into every aspect of American life. “There has been a lot of discussion at the state level, and I think it’s important for us to approach this problem at multiple levels,” said Anjana Susarla, a professor at Michigan State University who studies AI. “We could approach it at the national level. We can approach it at the state level, too. I think we need both.”
The proposed language would bar states from enforcing any regulation, including those already on the books. The exceptions are rules and laws that make things easier for AI development and those that apply the same standards to non-AI models and systems that do similar things. These kinds of regulations are already starting to appear. The biggest focus so far is not in the United States but in Europe, where the European Union has already implemented standards for AI. But states are starting to get in on the action.
Colorado passed a set of consumer protections last year, set to take effect in 2026. California adopted more than a dozen AI-related laws last year. Other states have laws and regulations that often deal with specific issues such as deepfakes or require AI developers to publish information about their training data. At the local level, some regulations also address potential employment discrimination if AI systems are used in hiring.
“States are all over the map when it comes to what they want to regulate in AI,” said Arsen Kounian, a partner at the law firm Mayer Brown. So far in 2025, state lawmakers have introduced at least 550 proposals around AI, according to the National Conference of State Legislatures. In a House committee hearing last month, Rep. Jay Obernolte, a Republican from California, signaled a desire to get ahead of more state-level regulation. “We have a limited amount of legislative runway to be able to solve that problem before the states get too far ahead,” he said.
While some states have laws on the books, not all of them have taken effect or seen any enforcement. That limits the potential short-term impact of a moratorium, said Cobun Zweifel-Keegan, managing director, Washington, for the International Association of Privacy Professionals. “There isn’t really any enforcement yet.”
A moratorium would likely deter state lawmakers and policymakers from developing and proposing new regulations, Zweifel-Keegan said. “The federal government would become the primary and potentially sole regulator around AI systems,” he said.
AI developers have asked for any guardrails placed on their work to be consistent and streamlined.
“We need, as an industry and as a country, one clear federal standard, whatever it may be,” Alexandr Wang, founder and CEO of the data company Scale AI, told lawmakers during an April hearing. “But we need one, we need clarity as to one federal standard, and we need preemption to prevent this outcome where you have 50 different standards.”
During a Senate Commerce Committee hearing in May, OpenAI CEO Sam Altman told Sen. Ted Cruz, a Republican from Texas, that a European-style regulatory system “would be disastrous” for the industry. Altman instead suggested that the industry develop its own standards.
Asked by Sen. Brian Schatz, a Democrat from Hawaii, whether industry self-regulation is enough at the moment, Altman said he thought some guardrails would be good but that “it’s easy for it to go too far. As I have learned more about how the world works, I am more afraid that it could go too far and have really bad consequences.” (Disclosure: Ziff Davis, CNET’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
Not all AI companies are backing a moratorium, however. In a New York Times op-ed, Anthropic CEO Dario Amodei called it “far too blunt an instrument,” saying the federal government should instead create transparency standards for AI companies. “Having this national transparency standard would help not only the public but also Congress understand how the technology is developing, so that lawmakers can decide whether further government action is needed.”
Concerns from companies, both the developers that create AI systems and the “deployers” that use them in interactions with consumers, often stem from fears that states will mandate significant work, such as impact assessments or transparency notices, before a product can be released, Kounian said. Consumer advocates said more regulation is needed, and that hampering states’ ability to act could hurt users’ privacy and safety.
A moratorium on specific state rules and laws could result in more consumer protection issues being handled in court or by state attorneys general, Kounian said. Existing laws around unfair and deceptive practices that are not specific to AI would still apply. “Time will tell how judges will interpret those issues,” he said.
Susarla said the pervasiveness of AI across industries means states might be able to regulate issues such as privacy and transparency more broadly, without focusing on the technology itself. But a moratorium on AI regulation could lead to such policies being tied up in lawsuits. “It has to be some kind of balance between ‘we don’t want to stop innovation,’ but on the other hand, we also need to recognize that there can be real consequences,” she said.
Much of the policy around the governance of AI systems happens because of those so-called technology-agnostic rules and laws, Zweifel-Keegan said. “It’s also worth remembering that there are a lot of existing laws, and there is potential to make new laws, that don’t trigger the moratorium but apply to AI systems as long as they apply to other systems,” he said.
A proposed 10-year moratorium on state AI laws is now in the hands of the US Senate, whose Committee on Commerce, Science and Transportation has already held hearings on artificial intelligence.
With the bill now in the Senate’s hands, and with more people becoming aware of the proposal, debate over the moratorium has picked up. The proposal cleared a significant procedural hurdle when the Senate parliamentarian ruled that it complies with the so-called Byrd Rule, which requires that provisions included in a budget reconciliation package deal directly with the federal budget. The decision to tie the moratorium to states’ acceptance of BEAD funding likely helped, Winters told me.
Whether it passes in its current form is now less a procedural question than a political one, Winters said. Senators from both parties, including Republican Sens. Josh Hawley and Marsha Blackburn, have expressed concerns about tying the hands of states.
“I think there’s a strong open question about whether it would have passed as currently written, even if it wasn’t prohibited,” Winters said.
Whatever bill the Senate approves must also be accepted by the House, where it passed by the narrowest of margins. Even some House members who voted for the bill have said they dislike the moratorium, notably Rep. Marjorie Taylor Greene, a key ally of President Donald Trump. The Georgia Republican posted on X this week that she is “adamantly opposed” to the moratorium and would not vote for the bill with the moratorium included.
At the state level, a letter signed by 40 state attorneys general, from both parties, called on Congress to reject the moratorium and to instead create that broader regulatory system. “This bill does not propose any regulatory scheme to replace or supplement the laws enacted or currently under consideration by the states, leaving Americans entirely unprotected from the potential harms of AI,” they wrote.