Artificial intelligence (AI), machine learning (ML), and the algorithms that often underpin them have generated significant policymaker interest. As these technologies advance, policymakers' understanding of how they work is vital to responsible policymaking. Our member companies are committed to responsible AI development and use. TechNet advocates for a federal AI framework that applies uniformly to all Americans regardless of where they live, encourages innovation, and ensures that consumers are protected. TechNet therefore supports the following principles:
- Comprehensive, interoperable data privacy laws should precede AI regulations.
- Avoid blanket prohibitions on artificial intelligence, machine learning, or other forms of automated decision-making. Reserve restrictions for specific, identified use cases that present a clearly demonstrated risk of unacceptable harm to a user or a clearly articulated, demonstrable national security threat, and narrowly tailor those requirements to the harms identified.
- Encourage the creation of AI task forces with robust industry participation. Such task forces establish a line of communication, provide industry expertise on AI, and allow consensus frameworks on reasonable AI regulation to develop. Ensure small businesses (SMBs) have opportunities to participate so they can access the benefits of AI technology.
- Ensure any requirements are clearly allocated to specific roles in the AI value chain. Recognize the different roles, responsibilities, and technical limitations of participants within that chain, and regulate them distinctly as appropriate.
- Do not force participants in the AI value chain to publicly disclose proprietary or protected information, and do not require an AI registry.
- Avoid new policies or regulations that duplicate or conflict with existing laws and regulations.
- Leverage existing authorities under state law that already provide substantive legal, anti-discrimination, and civil rights protections, and limit new authorities specific to the operation of artificial intelligence, machine learning, and similar technologies to areas where existing authorities are demonstrably inadequate. Allow measures taken to comply with one law or regulation to satisfy the requirements of another applicable law or regulation if the measures and requirements are reasonably similar in scope and effect.
- Ensure any requirements on automated decision tools focus on high-risk uses in which decisions are made solely through automated means. Avoid labeling entire sectors as inherently high risk; instead, focus on specific outcomes that involve the loss of life or liberty or that have significant legal effects on individuals.
- Do not impose opt-out requirements that conflict with practical realities or functionality that serves consumers’ interests.
- Regulation should encourage clear disclosure of AI systems where a reasonable person might otherwise believe they are interacting with another person. For example, the use of simulated personas such as chatbots should be clearly identified.
- Offer technically correct, feasible, and nuanced definitions for key terms and roles within the AI value chain, in accordance with existing state and federal legislation and emerging industry standards. Policymakers should avoid overly broad definitions, such as those that use “including but not limited to,” and instead adopt context-specific definitions that accurately address the diverse characteristics of AI applications, ensuring clarity on what is affected without encompassing common technologies or processes.
- Limit enforcement to relevant state agencies and avoid private rights of action. Ensure any enforcement actions limit damage awards to clearly cognizable forms of actual, demonstrated harm directly resulting from violations of the law. To promote balanced enforcement, legislation should include a right to cure and rebuttable presumptions, and should allow for affirmative defenses, giving businesses a reasonable opportunity to correct potential violations.
- Provide safe harbors for companies that test for and mitigate bias or other issues found in AI systems, as well as a reasonable right-to-cure period upon notice.
- Ensure data, including sensitive data subject to sufficient cybersecurity and privacy protections, can be used for internal testing and model training so that algorithms work inclusively and as intended.
- Avoid a one-size-fits-all policy approach and support a risk-based framework that ensures that comparable AI use cases are subject to consistent oversight and regulation across sectors.
- Rely on self-certification mechanisms wherever possible, and avoid mandating external or third-party audits of impact assessments or risk assessments. Instead, identify the audit or assessment requirements and goals, allowing companies to determine whether they can conduct the audit themselves or must seek third-party support.
- Rely on established national and international standards and frameworks, including the NIST AI Risk Management Framework and ISO standards, to guide policy discussions to ensure interoperability and avoid a patchwork of inconsistent regulations.
- Avoid holding developers, deployers, or distributors of AI and automated decision-making technologies liable for unknown or unforeseen circumstances that may arise from the use and deployment of their technologies.
- Ensure any requirements on content provenance allow for flexibility in provenance techniques across modalities (image, audio, video) and take into account the risk that a model may be misused.
Deepfakes
- Legislation regulating the creation and use of deepfakes should protect intermediaries from liability.
- Any disclosure or disclaimer requirements for electoral materials containing a deepfake should be placed on the candidate or sponsor of the material.
- Legislation should not disrupt the use of innovative technologies that detect deepfakes in order to protect against cybersecurity attacks and fraudulent activity.
- TechNet members are already active in developing protections against the creation, storage, and distribution of child sexual abuse material (CSAM), and members collaborate with one another to better fight CSAM. CSAM is contraband in all contexts, and states should ensure that criminal liability falls squarely on the criminal actors who create and disseminate such material. The actions companies take, including collaboration with law enforcement, to prevent, detect, protect against, report, or respond to the production, generation, incorporation, or synthesization of such material should be protected.