New York Poised to Enact Nation's First Safety Law for Powerful AI
New York stands poised to enact a nation-leading AI safety bill that would compel tech giants to publish safety protocols and report security incidents.
June 16, 2025

New York is on the verge of enacting a landmark piece of legislation that would impose first-in-the-nation safety and transparency requirements on the developers of the most powerful artificial intelligence models. The Responsible AI Safety and Education (RAISE) Act, having passed both the state Senate and Assembly, now awaits the signature of Governor Kathy Hochul.[1][2][3] If signed into law, the bill would mandate that major AI companies publish detailed safety protocols and report security incidents, placing New York at the forefront of AI regulation in the United States and setting a potential precedent for other states and the federal government.[4][2][5]
The core of the RAISE Act is its focus on "frontier AI models," defined as those developed by companies that have expended over $100 million on computational resources for training.[4][6] This targeted approach aims to regulate the most powerful systems, such as those created by industry giants like OpenAI, Google, and Anthropic, without stifling innovation at smaller startups.[4][6] Proponents, including the bill's sponsors, State Senator Andrew Gounardes and Assemblymember Alex Bores, argue that such powerful technology requires "commonsense safeguards" to protect against catastrophic risks.[4] These risks include the potential for AI to be used in designing biological weapons, perpetrating large-scale automated crime, or causing widespread harm and destruction.[4] The bill is designed to ensure that companies are not incentivized to cut corners on safety in the pursuit of profit.[4] The legislation has garnered support from prominent AI experts like Geoffrey Hinton and Yoshua Bengio, who see it as a necessary step toward responsible AI development.[1]
The specific requirements of the RAISE Act are comprehensive. It would compel large AI developers to create and publish a safety plan for mitigating severe risks before deploying their models.[5] These plans would need to be reviewed by a qualified third party to ensure their adequacy.[5] Furthermore, companies would be required to disclose any serious security incident to the New York Attorney General and the Division of Homeland Security and Emergency Services within 72 hours of discovery.[7][5] Such incidents could include the theft of a dangerous model by a malicious actor or a model behaving in a dangerous or unintended manner.[4][1] The Attorney General would be empowered to seek civil penalties from non-compliant companies, with fines potentially reaching into the millions of dollars, calculated as a percentage of the model's training cost.[4][8] The bill also includes crucial whistleblower protections, shielding employees who report serious safety risks from retaliation.[9][10]
Despite its passage through the legislature, the RAISE Act faces opposition from some corners of the tech industry. Business groups like the Business Software Alliance (BSA) and the Software & Information Industry Association (SIIA) have voiced significant concerns, arguing that the legislation was rushed and relies on vague definitions.[11][12] A primary criticism is that requiring the publication of safety protocols could inadvertently create a "roadmap for bad actors" to exploit vulnerabilities.[12] Industry representatives also contend that the bill unfairly holds developers responsible for the downstream misuse of their models by other actors, something they deem impossible to control fully.[11] Critics argue that regulation should target the application of AI rather than the development of the underlying technology itself, and that a state-by-state patchwork of laws could stifle innovation and cede U.S. leadership in the field.[13][14] They suggest that a national, Congressionally led approach would be more effective and consistent.[13]
The debate in New York mirrors a larger national conversation about how best to govern the rapidly advancing field of artificial intelligence. The RAISE Act is modeled in part on a similar California bill, SB 1047, which Governor Gavin Newsom ultimately vetoed.[2][3] However, the New York bill omits some of the more controversial elements of its California counterpart, such as a mandatory "kill switch" for AI models.[6] Governor Hochul has previously demonstrated a commitment to AI regulation, having signed legislation to establish safeguards for AI companion systems and to combat AI-generated child sexual abuse material.[15] Her decision on the RAISE Act, which she has until the end of the year to make, will be closely watched by lawmakers, tech companies, and safety advocates across the country.[3] The outcome will influence the trajectory of AI governance in the United States, determining whether New York leads the charge in mandating transparency and safety for the next generation of artificial intelligence.