The AI Policy Crossroads: 6 Key Legislative Developments Business and Tech Leaders Need to Watch

By Zack Huhn, Enterprise Technology Association (ETA)

Artificial intelligence has officially entered the policy fast lane.

In just the last six months, a wave of new legislation and regulatory proposals has swept across the U.S. — from sweeping federal bills to state-level trailblazing on AI safety and deepfake enforcement. These shifts are shaping the operating environment for businesses, developers, public sector agencies, and consumers alike.

If you lead a business or manage technology operations, now is the time to get ahead of the curve.

Here’s a quick breakdown of six important AI policy developments you need to know about — and what they mean for your strategy moving forward.

1. New York Passes the RAISE Act — First State Law Targeting “Frontier” AI Models

Description:
New York became the first state to pass legislation aimed squarely at the most powerful AI models (like GPT-4 or Claude). The Responsible AI Safety and Education (RAISE) Act requires developers of these frontier models to conduct risk assessments and report potential hazards, such as threats to human life or major economic disruption, before releasing their products.

Reference: Economic Times

What You Need to Know:
This could become a national blueprint for AI safety regulation. Expect pressure for transparency, auditability, and responsible deployment protocols for high-impact AI systems.
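
For developers, that pressure translates into documentation. Below is a minimal sketch of what a structured pre-release risk assessment record could look like; the fields and risk categories are illustrative assumptions, not language drawn from the RAISE Act itself.

```python
from dataclasses import dataclass, field, asdict
from datetime import date
import json

@dataclass
class RiskFinding:
    category: str      # e.g. "critical-infrastructure" or "economic-disruption"
    severity: str      # "low", "medium", or "high"
    mitigation: str    # what the developer commits to doing about it

@dataclass
class PreReleaseAssessment:
    model_name: str
    developer: str
    assessment_date: str
    findings: list[RiskFinding] = field(default_factory=list)

    def to_report(self) -> str:
        """Serialize the assessment for filing or internal audit."""
        return json.dumps(asdict(self), indent=2)

# Record one hypothetical finding before release.
assessment = PreReleaseAssessment(
    model_name="example-frontier-model-v1",   # hypothetical model name
    developer="Example AI Co.",
    assessment_date=date.today().isoformat(),
)
assessment.findings.append(RiskFinding(
    category="economic-disruption",
    severity="medium",
    mitigation="Stage the rollout and rate-limit API access pending review.",
))
print(assessment.to_report())
```

However the final regulations shake out, keeping records in a structured, exportable form like this makes audits and disclosure requests far less painful than reconstructing decisions after the fact.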

2. California’s AI Governance Report Sets the Stage for Policy Action

Description:
After vetoing SB 1047 last year, Governor Gavin Newsom assembled a team of AI experts (including Dr. Fei-Fei Li) to shape California’s next move. The result: a 53-page AI governance framework proposing third-party audits, whistleblower protections, and incident reporting.

Reference: The Verge

What You Need to Know:
California isn’t backing down. The recommendations may shape upcoming state legislation—and influence nationwide norms around AI transparency and independent oversight.

3. Federal Bill Proposes a 10-Year Moratorium on State-Level AI Regulation

Description:
Tucked into a larger federal funding bill, lawmakers in the House have introduced a proposal that would ban state and local governments from regulating AI for the next 10 years. The goal? Avoid a fragmented patchwork of state laws that could stifle innovation.

Reference: Financial Times

What You Need to Know:
This controversial move could centralize AI regulation at the federal level, but also risks delaying critical safeguards while national standards are still under development. Keep an eye on how this plays out — it may reshape your compliance roadmap.

4. New Jersey Criminalizes Deepfake Deception

Description:
New Jersey passed a new law criminalizing the creation or sharing of AI-generated deepfakes for the purpose of deception, including during elections or to defame individuals. The law includes both criminal penalties (up to 5 years in prison) and civil liability options.

Reference: AP News

What You Need to Know:
If your business deals in media, communications, or elections, expect increased scrutiny around AI-generated content. Proactive content labeling and media authentication will likely become best practice.
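
One practical starting point is attaching provenance metadata at the moment content is generated. The sketch below writes a JSON "sidecar" manifest next to a media file, with a SHA-256 hash so downstream consumers can detect tampering. The manifest schema here is a hypothetical example; production systems should look at industry-backed standards such as C2PA content credentials.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def write_provenance_sidecar(media_path: str, generator: str) -> Path:
    """Write a JSON manifest next to a media file declaring it AI-generated.

    The manifest fields are a hypothetical example, not a formal standard.
    """
    media = Path(media_path)
    manifest = {
        "ai_generated": True,
        "generator": generator,  # tool or model that produced the asset
        "sha256": hashlib.sha256(media.read_bytes()).hexdigest(),  # tamper check
        "labeled_at": datetime.now(timezone.utc).isoformat(),
    }
    sidecar = media.parent / (media.name + ".provenance.json")
    sidecar.write_text(json.dumps(manifest, indent=2))
    return sidecar

# Usage (hypothetical file): label a rendered image before it ships.
# write_provenance_sidecar("campaign_banner.png", generator="internal-image-model")
```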

5. Montana Restricts Government Use of AI

Description:
Montana’s HB 178 bans certain uses of AI by state and local governments, particularly surveillance, behavioral manipulation, and enforcement actions taken without human oversight. It took effect immediately.

Reference: Wikipedia

What You Need to Know:
This sets a precedent for AI guardrails in public services. If you’re providing AI tools for government, you’ll need to design for human-in-the-loop decision-making and transparent workflows.
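
In practice, "human-in-the-loop" means no AI recommendation takes effect until a named reviewer signs off, with a durable record of both the recommendation and the decision. The sketch below shows one way to structure that gate; the data model and workflow are illustrative assumptions, not requirements drawn from HB 178.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Recommendation:
    case_id: str
    action: str        # what the AI system proposes
    rationale: str     # model-provided justification shown to the reviewer

@dataclass
class Decision:
    case_id: str
    approved: bool
    reviewer: str
    reviewed_at: str

audit_log: list[Decision] = []

def apply_with_review(rec: Recommendation, reviewer: str, approved: bool) -> bool:
    """Gate an AI recommendation behind an explicit human decision.

    Nothing executes unless a named reviewer approves, and every decision
    is appended to an audit log for later transparency review.
    """
    decision = Decision(
        case_id=rec.case_id,
        approved=approved,
        reviewer=reviewer,
        reviewed_at=datetime.now(timezone.utc).isoformat(),
    )
    audit_log.append(decision)
    return decision.approved  # caller acts only on an approved decision

# Usage: the proposed action runs only if the reviewer approves it.
rec = Recommendation("case-042", "flag permit application for manual review",
                     "income fields inconsistent with submitted documents")
if apply_with_review(rec, reviewer="j.doe", approved=True):
    print(f"Executing approved action for {rec.case_id}: {rec.action}")
```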

6. TAKE IT DOWN Act: New Federal Law Targets Deepfake Exploitation

Description:
Signed into law in May 2025, the TAKE IT DOWN Act (S. 146) makes it a federal crime to knowingly publish non-consensual intimate imagery, including AI-generated deepfakes. It also requires covered platforms to establish a notice-and-takedown process and remove reported content within 48 hours of a valid request.

Reference: Wikipedia

What You Need to Know:
This is a milestone for platform responsibility. If your business operates a digital service or content-sharing platform, you’ll need policies in place for prompt removal of flagged AI-generated content.
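
Operationally, the 48-hour removal window means every report needs a tracked deadline. The sketch below models a minimal moderation queue around that clock; the data model and function names are illustrative assumptions, not an implementation of any platform's actual system.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# The Act requires covered platforms to remove reported content within
# 48 hours of a valid request; treat that as the hard deadline here.
REMOVAL_DEADLINE = timedelta(hours=48)

@dataclass
class TakedownRequest:
    content_id: str
    received_at: datetime
    removed_at: datetime | None = None

    @property
    def deadline(self) -> datetime:
        return self.received_at + REMOVAL_DEADLINE

    @property
    def overdue(self) -> bool:
        """True if the content is still up past the statutory deadline."""
        return self.removed_at is None and datetime.now(timezone.utc) > self.deadline

def triage(queue: list[TakedownRequest]) -> list[TakedownRequest]:
    """Surface unresolved requests, most urgent first."""
    open_requests = [r for r in queue if r.removed_at is None]
    return sorted(open_requests, key=lambda r: r.deadline)

# Usage: a request received 40 hours ago has 8 hours left on the clock.
req = TakedownRequest(
    content_id="post-9817",
    received_at=datetime.now(timezone.utc) - timedelta(hours=40),
)
for r in triage([req]):
    print(r.content_id, "deadline:", r.deadline.isoformat(), "overdue:", r.overdue)
```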

Bottom Line: Navigating the New AI Policy Terrain

| Level | Legislation/Policy | Focus Area |
|---|---|---|
| Federal | 10-Year Moratorium Bill | Preemption of state regulation |
| Federal | TAKE IT DOWN Act | Deepfake sexual exploitation ban |
| State (NY) | RAISE Act | Frontier model safety and risk control |
| State (CA) | Governance Report | Audits, whistleblower protections, transparency |
| State (NJ) | Deepfake Criminalization | Misuse of AI-generated deceptive media |
| State (MT) | HB 178 Government AI Ban | Transparency and limits on government AI use |

Final Thoughts from ETA

At the Enterprise Technology Association, we believe responsible AI policy can empower innovation — not stifle it. But businesses need clarity and foresight.

Now’s the time to:

  • Monitor local and state laws that could impact your operations

  • Audit your AI tools for transparency and risk exposure (see the inventory sketch after this list)

  • Advocate for smart policy that balances innovation with trust
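
On the audit point, even a simple inventory of where AI is used in your organization goes a long way. The sketch below flags tools that warrant a closer compliance look; the record fields and flagging criteria are illustrative assumptions you would adapt to your own risk posture.

```python
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    name: str
    vendor: str
    use_case: str
    handles_personal_data: bool
    human_review_required: bool
    risk_notes: str

def flag_for_review(inventory: list[AIToolRecord]) -> list[AIToolRecord]:
    """Return tools that most need a closer compliance look: anything that
    touches personal data or acts without a human check."""
    return [t for t in inventory
            if t.handles_personal_data or not t.human_review_required]

# Usage with two hypothetical entries.
inventory = [
    AIToolRecord("resume-screener", "Example Vendor", "hiring triage",
                 handles_personal_data=True, human_review_required=True,
                 risk_notes="bias audit scheduled quarterly"),
    AIToolRecord("chat-summarizer", "Example Vendor", "meeting notes",
                 handles_personal_data=False, human_review_required=False,
                 risk_notes="output not used for decisions"),
]
for tool in flag_for_review(inventory):
    print("Review:", tool.name, "-", tool.risk_notes)
```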

Want help staying ahead? Join ETA to receive our quarterly AI Policy Tracker and collaborate with the leaders shaping tomorrow’s tech governance.

Join us at joineta.org
