States Grapple with Defining AI “Developers”

Key highlights this week:

  • We’re currently tracking 678 AI-related bills in 45 states this year, 76 of which have been enacted into law.

  • Delaware becomes the latest state to form a task force to study AI and recommend policy changes. 

  • Lawmakers in New Jersey enacted new tax credits with the hope of attracting AI investment. 

  • And New Hampshire’s governor signed two AI bills into law: one criminalizing nonconsensual sexual deepfakes and another restricting how state agencies can use AI.

As new laws regulating AI at the state level continue to progress, one sticking point that has emerged is how best to define a “developer” of an AI system. AI systems are not exactly static programs: they can learn from additional training over time, and one advantage is that you can train an AI model on data specific to your organization, a process known as “fine-tuning.” But how much additional training or modification of a model crosses the line from using a model to developing one? The handful of bills attempting to regulate the development of AI models can shed some light on this debate.

Let’s start with some AI development basics to set the stage for this discussion. The big frontier AI model companies (e.g., OpenAI, Google, Anthropic, and Meta) use astronomical amounts of compute to train the models we use today (ChatGPT, Gemini, Claude, and Llama, respectively). That initial process is called “pre-training,” and it can take many months and billions of dollars to complete. The result is a model with general-purpose knowledge. Which is great. But you can also take that general-purpose model and specialize it for specific tasks or domains. To do this, organizations (often not the original developer of the model) can use smaller, specialized data sets to “fine-tune” the model for a specific purpose. This training typically requires far less compute and can be finished in hours or days rather than months. From a public policy perspective, the key question is how much fine-tuning or modification of the original model it takes before the modifier becomes responsible for potential harm caused by the use of that modified model.
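To make “fine-tuning” concrete, here is a minimal Python sketch using the open-source Hugging Face Transformers library. The base model name, dataset file, and training settings are illustrative assumptions, not a description of any particular organization’s process.

    # Illustrative sketch of fine-tuning: the model name, dataset file, and
    # hyperparameters below are assumptions for demonstration purposes only.
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    base_model = "meta-llama/Llama-3.1-8B"  # hypothetical choice of base model
    tokenizer = AutoTokenizer.from_pretrained(base_model)
    model = AutoModelForCausalLM.from_pretrained(base_model)

    # Many causal-LM tokenizers lack a padding token; reuse the end-of-sequence token.
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token

    # A small, organization-specific dataset (assumed to be a plain-text file).
    data = load_dataset("text", data_files={"train": "company_docs.txt"})
    tokenized = data.map(
        lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
        batched=True,
        remove_columns=["text"],
    )

    # For causal language modeling, the collator copies inputs to labels.
    collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

    args = TrainingArguments(
        output_dir="fine-tuned-model",
        num_train_epochs=1,              # hours or days of training, not months
        per_device_train_batch_size=4,
    )

    Trainer(
        model=model,
        args=args,
        train_dataset=tokenized["train"],
        data_collator=collator,
    ).train()                            # adjusts the pre-trained weights on new data

The point is less the code than the scale: a run like this can finish on a modest amount of hardware in hours or days, a far cry from the months-long, billion-dollar pre-training phase.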

Nearly all of the 700 or so AI bills introduced this year aim to regulate the use of an AI model. But there are a handful of bills that would put safeguards in place to ensure that the actual developers of those models do not release AI systems that could cause catastrophic harm. These are the proposals that must seriously consider when post-release fine-tuning should bring the modifier within the “developer” definition.

A good place to start is Colorado’s landmark AI law. While Colorado’s law is not scheduled to go into effect until February 2026, it’s the only bill aimed at AI model developers that has crossed the finish line so far. Under the Colorado law (CO SB 205), “developer” is defined as “a person doing business in this state that develops or intentionally and substantially modifies an artificial intelligence system.” The key phrase in that definition for our purposes is “intentionally and substantially modifies,” which has its own extensive definition under the law: “a deliberate change made to an [AI] system that results in any new reasonably foreseeable risk of algorithmic discrimination.”

Importantly, the Colorado law also spells out what “intentional and substantial modification” does not mean, which includes situations where the AI system “continues to learn” after it is deployed or made available to a deployer, or where the change was “predetermined by the deployer” in an initial impact assessment or technical documentation. So, if you make substantial changes to an AI model but those changes do not create a reasonably foreseeable risk of algorithmic discrimination, and you provide documentation and an impact assessment, you should be in the clear from falling under the “developer” definition in Colorado. Most observers expect lawmakers to significantly amend the Colorado law before it goes into effect in 2026, so this definition could change.

While the Colorado law relies primarily on an output test for when modifications of an AI model put you under the “developer” definition, a key California proposal takes a computational threshold approach. Under California Senator Scott Wiener’s SB 1047, a “developer” is defined as “a person that performs the initial training of a covered model either by training a model using a sufficient quantity of computing power, or by fine-tuning an existing covered model using sufficient quantity of computing power.” Fine-tuning an AI model is defined as adjusting the model weights of a trained model by exposing it to additional data. You can think of “model weights” as the numerical parameters that encapsulate everything the model learned from its months-long training. And a “sufficient quantity of computing power” is spelled out in the statute (“equal to or greater than three times 10^25 integer or floating-point operations”), though the Frontier Model Division the bill would establish would have authority to adjust that threshold over time.
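For a rough sense of scale, here is a short Python sketch comparing a hypothetical fine-tuning run against that 3x10^25-operation threshold. The parameter and token counts are illustrative assumptions, and the “6 × parameters × tokens” rule of thumb is a common rough estimate of training compute, not language from the bill.

    # Illustrative only: all figures below are assumptions.
    SB1047_FINE_TUNE_THRESHOLD = 3e25   # "three times 10^25" operations, per the bill text

    def estimated_training_flops(parameters: float, tokens: float) -> float:
        """Rough estimate: ~6 floating-point operations per parameter per training token."""
        return 6 * parameters * tokens

    # Hypothetical run: fine-tuning a 70-billion-parameter model on 10 billion tokens.
    flops = estimated_training_flops(70e9, 10e9)   # about 4.2e21 FLOPs

    print(f"Estimated fine-tuning compute: {flops:.1e} FLOPs")
    print("Crosses the 'developer' threshold:", flops >= SB1047_FINE_TUNE_THRESHOLD)
    # -> roughly 4.2e21 FLOPs, thousands of times below 3e25, so this run would
    #    not make the fine-tuner a "developer" under the bill's compute test.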

The California proposal further addresses the modification question by defining a “derivative” model separately from the “covered model” that is the bill’s primary target for safety rules. Generally, a derivative model is an AI model derived from another AI model. The bill’s definition of “covered model derivative” captures post-training modifications unrelated to fine-tuning, as well as fine-tuning that falls below the computational level that would trigger the “developer” definition above (3x10^25 FLOPs). Essentially, if you’re not conducting a massive amount of fine-tuning, then it remains the original developer’s responsibility to comply with the legislation’s safety requirements.

A previous version of the bill instead set a percentage threshold: fine-tuning that used more than 25% of the compute used to pre-train the original model would make the resulting model the responsibility of the entity doing the fine-tuning. The latest version of the bill backs off this percentage-of-compute framework for derivative models in favor of the fixed compute threshold and clarifies that modifications falling outside the definition of “fine-tuning” do not count toward that threshold.
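To see how the two approaches can diverge, here is a small sketch with assumed (not real) compute figures: a very large fine-tuning run can exceed 25% of a model’s pre-training compute while still falling short of the fixed 3x10^25 threshold.

    # Illustrative only: both compute figures below are assumptions.
    pretraining_flops = 4e25      # hypothetical frontier-scale pre-training run
    fine_tuning_flops = 1.2e25    # hypothetical very large fine-tuning run

    relative_test = fine_tuning_flops > 0.25 * pretraining_flops   # earlier 25% framework
    absolute_test = fine_tuning_flops >= 3e25                      # current fixed threshold

    print("Exceeds 25% of pre-training compute:", relative_test)   # True
    print("Meets the 3e25 absolute threshold:", absolute_test)     # False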

It’s a lot of math for state legislation. But it’s clear that lawmakers are trying to strike a balance: excluding users who make straightforward adjustments to frontier models for their own purposes (e.g., training a model on their company’s data) while capturing modifications of these powerful models that could enhance the very hazards these AI safety bills are trying to prevent. The shifting formulas and legislative adjustments suggest this is a hard balance to get right, and the exponential pace of the technology makes the job even more challenging for policymakers. It is also why lobbyists will need to keep a close eye on these esoteric definitions to keep their own organizations’ uses of AI from falling within the new regulatory requirements.

Recent Developments

In the News

  • Llama 3.1: On Tuesday, Meta released the newest version of its open-source AI models, Llama 3.1. Its largest version aims to rival the frontier models of the major AI developers and has 405 billion parameters, the adjustable values a model learns during training (Meta also released smaller versions with 70 billion and 8 billion parameters).

Major Policy Action 

  • Texas: We’re hearing that Rep. Capriglione (R) plans to unveil comprehensive AI legislation next month as lawmakers in the Lone Star State prepare for their biennial legislative session in early 2025. Rep. Capriglione, who serves on the NCSL AI Task Force with the authors of major AI bills in Colorado and Connecticut, could use those bills as a starting point, albeit with a “lighter regulatory touch” than blue states.

  • Delaware: Last Wednesday, Gov. Carney (D) signed an AI study bill (DE HB 333) into law. The new law creates the Delaware Artificial Intelligence Commission, tasked with making recommendations to lawmakers on AI utilization and safety, and requires an inventory of all current state uses of generative AI technology along with the identification of high-risk areas for the implementation of AI.

  • New Hampshire: On Monday, Gov. Sununu (R) signed bills addressing sexual deepfakes (NH HB 1319) and state agencies’ use of AI (NH HB 1688) into law. The first law amends the crime of nonconsensual dissemination of private sexual images to include certain synthetic sexual images. The second law restricts the ways that state agencies can use AI.

  • Wisconsin: On Wednesday, the Legislative Council Study Committee on the Regulation of Artificial Intelligence held its first meeting, focusing on AI’s impact on the state workforce. The group will meet again in August to discuss deepfakes and the role of AI in misinformation.

  • New Jersey: On Thursday, Gov. Murphy (D) signed a bill (NJ SB 3432) to create the Next New Jersey Program, which aims to attract new investment in the artificial intelligence industry to the state through tax credits. Businesses with agreements with the New Jersey Economic Development Authority will be eligible for a tax credit equal to the least of: 0.1% of the eligible business’s total capital investment multiplied by the number of new full-time jobs; 25% of the eligible business’s total capital investment; or $250 million.
