Artificial intelligence (AI), machine learning (ML), and the algorithms that often support these technologies have generated significant interest among policymakers. As technological advances emerge, policymakers’ understanding of how these technologies work is vital for responsible policymaking. Our member companies are committed to responsible AI development and use. This means prioritizing safety and transparency while ensuring that innovation can continue to thrive. To achieve these goals, TechNet will advocate for a federal AI framework that applies uniformly to all Americans regardless of where they live, encourages innovation, and ensures reasonable consumer protections. TechNet therefore supports the following principles:

  • Comprehensive, interoperable data privacy laws should precede AI regulations.
  • Do not impose blanket prohibitions on artificial intelligence, machine learning, or other forms of automated decision-making. Do not label entire sectors as inherently high risk, and ensure any proposed restrictions target specific, harmful outcomes that involve national security or the loss of life or liberty.
  • Reserve any requirements on automated decision tools for high-risk uses where adverse decisions are based solely on automated processing. Low-risk automated decision tools should be clearly exempted.
  • Encourage the creation of AI task forces with robust industry participation, which establish a line of communication, provide industry expertise on AI, and allow for the development of consensus frameworks on reasonable AI regulations. Ensure small businesses have opportunities to participate on AI task forces so SMBs can access the benefits of AI technology.
  • Ensure any requirements are clearly allocated to specific roles in the artificial intelligence value chain. Recognize the different roles and responsibilities of participants within the AI value chain, including their technical limitations, and regulate them distinctly as appropriate.
  • Do not force participants within the AI value chain to publicly share information that is proprietary or protected, and do not require an AI registry. Any legislation should explicitly protect trade secrets.
  • Avoid new policies or regulations that duplicate or conflict with existing laws and regulations.
  • Ensure that compliance obligations are proportionate to consumer risk and provide flexibility to support innovation.
  • Leverage existing authorities under state law that already provide substantive legal, anti-discrimination, and civil rights protections, and create new authorities specific to the operation of artificial intelligence, machine learning, and similar technologies only where existing authorities are demonstrably inadequate. Allow measures taken to comply with one law or regulation to satisfy the requirements of another applicable law or regulation if such measures and requirements are reasonably similar in scope and effect.
  • Do not impose consumer opt-out requirements or broad appeal rights that conflict with practical realities or functionality that serves consumers’ interests.
  • Offer technically correct, feasible, and nuanced definitions for key terms and roles within the AI value chain, in accordance with existing state and federal legislation and emerging industry standards. Policymakers should avoid overly broad definitions, such as those that use “including but not limited to,” and instead adopt context-specific definitions that accurately address the diverse characteristics of AI applications, ensuring clarity on what is affected without encompassing common technologies or processes.
  • Limit enforcement to relevant state agencies and avoid private rights of action. Ensure any enforcement actions limit damage awards to clearly cognizable forms of actual, demonstrated harm directly resulting from violations of the law. To promote balanced enforcement, legislation should include a right to cure and rebuttable presumptions, and allow for affirmative defenses, giving businesses a reasonable opportunity to correct potential violations.
  • Provide safe harbors for companies that demonstrate adherence to robust internal evaluation and safety validation systems, particularly where such systems are designed to meet or exceed existing, sector-specific regulations, as well as for companies that publish and comply with clear testing and mitigation policies.
  • Ensure data, including sensitive data with sufficient cybersecurity and privacy protections, can be used to conduct internal testing and model training to ensure models work inclusively and as intended.
  • Avoid a one-size-fits-all policy approach and support a risk-based framework that ensures that comparable AI use cases are subject to consistent oversight and regulation across sectors.
  • Rely on self-certification mechanisms wherever possible, and avoid mandating external or third-party audits of impact assessments or risk assessments. Rather, identify the assessment requirements and goals, allowing companies to either leverage their existing, sector-specific evaluation or validation processes to meet those goals or otherwise determine if they must seek third-party support.
  • Rely on established national and international standards and frameworks, including the NIST AI Risk Management Framework and ISO standards, to guide policy discussions to ensure interoperability and avoid a patchwork of inconsistent regulations.
  • Avoid holding developers or deployers of AI and automated decision-making technologies liable for any unknown, unintended, or unforeseen circumstances or subsequent modifications that may arise from the use of their technologies.
  • Ensure any requirements on content provenance are technically feasible, allow for flexibility of provenance techniques across various modalities (image, audio, video), and provide flexibility to account for integrity and safety-related use cases.
  • Chatbot disclosure legislation should be risk-based, focusing on potential harms tied to a chatbot’s function and impact. High-risk uses may warrant user notification to ensure transparency, while low-risk uses should not carry the same requirements. Any disclosure requirements should focus on what a reasonable person would understand when interacting with a chatbot, and any disclosure obligations should be specifically assigned to the appropriate entity in the AI value chain.

Deepfakes

  • The term “synthetic media” refers to audio, video, or image content that has been altered or wholly manufactured using AI. When synthetic media is manipulated so that it falsely appears authentic or truthful to a person, it is often referred to as “manipulated media” or a “deepfake.”
  • Legislation should protect against upstream and intermediary liability when regulating the creation and use of deepfakes. Legislation should also recognize the protections and safeguards placed by developers within general-purpose AI systems to prohibit and prevent the misuse of the systems’ content-generating capabilities.
  • With respect to deepfakes intended to influence elections, we believe that any disclosure or disclaimer requirements for electoral materials containing a deepfake should be placed on the candidate or sponsor of the material. Creators should ultimately be responsible for their content, and this is especially true in a sensitive and highly context-dependent space like political speech.
  • Legislation should not disrupt the use of innovative technologies to detect deepfakes in order to protect against cybersecurity attacks, identity theft, and other fraudulent activities.
  • TechNet members are already active in developing protections against the creation, storage, and distribution of child sexual abuse materials (CSAM), and members collaborate with one another to better fight CSAM. CSAM is contraband in all contexts, and states should ensure that criminal liability is placed squarely on the criminal actors who create and disseminate such material. The actions currently taken by companies to prevent, detect, protect against, report, or respond to the production, generation, incorporation, or synthesization of such material should be protected.
  • Non-Consensual Intimate Image (NCII) abuse causes serious harm to victims’ safety, privacy, and well-being. TechNet supports strong, effective measures to combat NCII, and our member companies have implemented tools and policies to do so. States have a role in imposing criminal and civil penalties for individuals who create and share NCII. However, we would urge states to avoid unnecessarily duplicating the recently passed federal TAKE IT DOWN Act. Doing so would risk creating a conflicting patchwork that would impinge on critical efforts to provide redress for victims of NCII.

Digital Replicas

  • In recent years, the proliferation of digital replicas or artificially generated outputs (“synthetic digital imitations”) of an individual’s name, image, voice, and likeness has become a priority for lawmakers. These replicas are highly realistic representations that are readily identifiable as the voice or visual likeness of an individual.
  • Unlike deepfakes, they may be authorized or unauthorized and can be produced by any type of digital technology, not just AI. Due to this, and to avoid unduly restricting the authorized use of replicas, the bar for liability should be high.
  • When examining this issue, policymakers should work to ensure liability for these outputs falls on the end-user who has actual knowledge that the replica is unauthorized and distributes it, not on the providers of generative artificial intelligence tools or intermediaries.
  • Attaching liability to model developers or operators of general purpose AI systems for the creation of authorized or unauthorized digital replicas alone will disincentivize developers from designing tools that benefit creative outputs for fear of being held liable as a co-creator of violative content. AI regulation should not penalize companies for merely providing tools that may be used for permissive and non-permissive uses.
  • Personal uses of digital replicas, including those stored on a personal device, should not be subject to regulation. Monitoring and enforcing regulations on the private creation of digital replicas would present insurmountable practical challenges and significant privacy concerns. Focusing enforcement efforts on the act of public communication provides a clear and discernible point of intervention.
  • Any policy solution should take significant care not to stifle speech, such as by ensuring users are able to create parody and newsworthy content. A digital replica can qualify as a permissive use when it serves a transformative, expressive purpose that outweighs the need to protect the individual.
  • Striking the balance between protecting the rights in one’s likeness and free expression is paramount, as overly broad restrictions on digital replicas could stifle innovation and limit the public’s access to valuable forms of speech.
  • TechNet encourages legislators not to advocate for notice-and-removal schemes or the creation of a property right; either would create inconsistencies across states that lead to operational challenges. A better approach is to penalize the end-user who has actual knowledge they are both creating an unauthorized digital replica and publicly communicating it. Damages should be commensurate with the harm; we recommend actual damages because unauthorized digital replicas are most often not intended to be harmful to the individual being imitated.
