Artificial intelligence (AI) is a transformational technology that has the potential to revolutionize how we live and work and help us solve the most significant challenges of our time. AI can enhance productivity, democratize and expand access to important services, and improve product innovation. TechNet members represent many of the leading AI and automated systems developers, researchers, deployers, and users.
Leverage Existing Laws and Adopt a Risk-Based Approach for Effective AI Regulation
- As policymakers consider new regulations for AI, it is important to note that existing rules under sectoral regulation and laws already prohibit unlawful behavior, including such behavior perpetrated through the use of AI. For example, many existing civil rights laws apply to AI models used in education, healthcare, employment, housing, financial services, and access to goods and services. Such laws and regulations, which benefit from well-developed regulatory and enforcement frameworks, focus on preventing and providing recourse against the prohibited conduct rather than the means by which the conduct was accomplished. In some cases, existing legislation already provides a more effective way to regulate the safe use of AI.
- Any new laws or regulations, as well as guidance documents and enforcement statements, should focus on known or rationally anticipated harms that could be prevented or addressed by filling gaps in existing legal regimes. Notably, any new laws or regulations should be narrowly scoped to target identifiable gaps. Further, when considering new AI laws or regulations, policymakers should take into account the following:
- It is crucial for policymakers to recognize the diverse array of stakeholders involved in AI systems and across the AI value chain. Careful consideration must be given to defining and designating regulatory responsibility that aligns with the roles and interactions of these entities.
- The AI startup ecosystem is vital to maintaining America’s competitive edge in the global economy. Potential implications for small and mid-size businesses must be considered, especially in terms of ensuring their access to a diverse AI ecosystem.
- Avoid any “shutdown” requirements as these will disincentivize downstream innovation and create disparities in the global competition to develop AI capabilities.
- Any new regulations should be subject to existing Regulatory Impact Assessment analyses.
- Policymakers should adopt an incremental and collaborative approach to AI governance. To promote innovation and adapt to technological changes, we encourage the use of evidence-based regulatory tools like safe harbors, which allow the industry to test and share best practices.
- Laws should not impose broad opt-outs that conflict with the practical realities of functionality that serves consumers’ interests, such as a website’s ability to provide search results.
- Ensure that considerations and requirements regarding the use of commercial AI systems by federal, state, and local government agencies are calibrated to the level of risk the intended use case poses, consistent with any new AI frameworks applicable to the private sector.
- We believe there should be a central coordinator of the federal government’s development, deployment, and use of AI systems to ensure that AI policy and regulations are consistent across agencies and industries. This coordinator should ensure that AI policies are risk-based and that the rules and regulations actors are subject to reflect the level of risk the AI use case entails, not which regulatory body may claim authority over an entity. This coordinator should partner with existing subject matter agencies on particularly complex or technical use cases that may benefit from specialized expertise.
- The National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF 1.0) should be promoted as a voluntary model for AI lifecycle management, including design, development, deployment, and post-deployment.
- Private rights of action must be avoided because they can undermine innovation, subject small and large businesses to abusive and frivolous litigation tactics, and strain the judicial system.
- Leverage existing enforcement mechanisms and protections from intermediary liability to address AI enforcement challenges.
- A consistent risk-based approach is needed to provide clear guidance and to prevent a patchwork of differing state laws that could impede innovation and progress. A consistent and level playing field for all entities developing, deploying, and using AI is essential.
- Policymakers should prioritize global cooperation and coordination in AI regulations, and they should seek to avoid regulatory divergence when it could harm innovation, trade, and investment critical to U.S. AI leadership.
- Establish a national privacy standard to promote consistent regulation of Americans’ data. A comprehensive and preemptive federal privacy law that protects consumers and provides businesses certainty about their responsibility is an essential component of a coherent national AI-focused policy. A clear national framework will also help build trust in AI systems. TechNet’s principles on privacy can be found here.
Responsible AI Evaluations
- Any transparency, explainability, or audit requirements imposed on AI systems must account for the protection of personal information and carefully balance proprietary and trade secret protections for the AI system against the technical feasibility of implementing such requirements. Such requirements must also not jeopardize the safety systems of AI-driven services.
- For example, disclosure of actual training data without appropriate safeguards risks exposing customer and company confidential and proprietary information.
- Regulators do not need unfettered access to proprietary AI models to assess their safety. Any proposed AI audit requirements need to be reasonable, outcome-based, and focused on AI-based systems that are deployed in the market.
- Leading AI developers and academics are continuing to research and improve how to best explain the output of generative AI systems. We encourage the federal government to support continued research and development into best practices for explainability, transparency, and auditing and discourage “one-size-fits-all” regulations as this technology continues to evolve.
- Ensure any requirements on content provenance allow for flexibility of provenance techniques across various modalities (image, audio, video).
- Regulations requiring enhanced disclosures for users or regulators should apply only to high-risk applications that lack an existing regulatory structure to govern situations where the AI system’s compromise, misuse, or destruction would be reasonably likely to result in loss of life, liberty, or significant legal effects.
- TechNet supports the ongoing work of the U.S. AI Safety Institute to develop science-based AI testing standards and foster international collaboration on AI safety, including efforts to harmonize global standards around AI testing and evaluations. We believe it is important for NIST to continue its longtime work of advancing measurement science and collaborating with private industry to develop responsible safety practices.
Transparency
- We urge policymakers to avoid one-size-fits-all transparency requirements on AI systems, as the transparency required will likely differ among actors across the AI value chain. Regarding transparency between developers and deployers, it is essential that any such requirements establish a commitment that developers will share all relevant information that deployers need to support their applicable regulatory compliance. Since users of AI will not have the same regulatory compliance responsibilities as deployers, any transparency or audit reporting requirements for users may reasonably differ and be limited to high-risk uses of AI.
- Support public education efforts on how AI systems operate in order to help demystify AI.
- TechNet supports the disclosure of generative AI content to users in line with industry best practices. Industry leaders are still researching how to best indicate content has been AI-generated and when such indications are appropriate. We are supportive of this ongoing discussion and research to best inform the American public about the content they are viewing.
External Reviews
- TechNet believes it is premature to mandate independent third-party auditing of AI systems. Mandating an independent audit before appropriate technical standards and conformity assessment requirements are established could open AI systems to national security threats, trade secrets theft, and inaccurate audit reports.
- We believe AI auditing standards, ethics, or oversight rules must account for use-case-specific auditing needs, be calibrated to the risk of the specific use case, be set to measurable benchmarks, and ensure safe and ethical practices that promote continued innovation while also protecting intellectual property, trade secrets, and security.
- AI audit findings should also be accepted reciprocally across local, state, and federal jurisdictions to limit resource burdens and sustain market access for the AI startup ecosystem.
Mitigate Potential Bias
- Throughout its lifecycle, AI development must reflect our society’s highest ideals, and its performance must be appropriately monitored and evaluated. Appropriate measures to identify, track, and mitigate unintended bias and discrimination should be implemented.
- Different actors such as developers, deployers, and users of AI systems should implement oversight and accountability processes appropriate to their role in the AI value chain to ensure safety, fairness, and trustworthiness; protect against malicious activity; and address flawed data sets or assumptions.
- Existing anti-discrimination laws already apply to AI models in many important contexts, including housing, health, employment, and consumer financial services (e.g., the Fair Housing Act, Section 1557 of the Affordable Care Act, Title VII of the Civil Rights Act of 1964, and the Equal Credit Opportunity Act). Therefore, additional legislative and/or regulatory obligations in these areas at this time would be unnecessarily duplicative, would create inconsistent or conflicting standards, and would chill innovation in the United States. Instead, policymakers should leverage existing tools to address concerns of bias.
- TechNet members follow legal guidelines at all stages when developing, testing, and monitoring AI assessments, and in many cases, they test for group differences beyond those required by law.
- In cases where bias may result despite a party’s best efforts to mitigate it, the party should be given a rebuttable presumption of reasonable care if it has complied with the relevant law.
- To support innovation and the development of new bias-detection techniques, legislation should exclude from scope: (1) AI systems and models specifically developed and put into service for the sole purpose of scientific research and development; and (2) scientific research and development activity on AI systems or models prior to being placed on the market or put into service.
Secure Advanced Systems
- Leverage security by design principles to enhance cybersecurity within AI systems at the start of their lifecycle.
- Empower America’s cyber defenders by funding the use of AI-enhanced cybersecurity services and tools within the federal government.
- Strengthen the adoption of AI cybersecurity awareness training to help minimize risk and prevent loss of intellectual property, data, and money.
- Support bidirectional information sharing and cyber threat programs that account for threat actors leveraging AI.
- Avoid mandating backdoors or licensing keys for advanced AI chips.
Build the Infrastructure to Catalyze the Innovation Economy
- To secure America’s position as the global leader in AI, we recommend prioritizing and streamlining investments in AI infrastructure and supply chains, including through modernized energy grids, high-speed broadband, and advanced semiconductor manufacturing.
- Support public-private partnerships in establishing and maintaining upskilling and reskilling programs to help Americans best utilize and improve their productivity with automated tools.
- Some of these programs will be government-funded and designed, but many companies are already providing useful resources to help Americans advance their careers. Governments at all levels should seek to understand and build on what is already working.
- Promoting upskilling, investing in workforce programs, and encouraging registered apprenticeships offer a proactive approach to fostering diversity among AI developers, deployers, monitors, and users. This is a valuable strategy to address bias and workforce concerns throughout the AI lifecycle.
- Develop a skills taxonomy for AI, similar to cybersecurity, in order to encourage skills portability and creation of recognized industry certifications.
- Support government funding for AI safety research and infrastructure.
- Congress must authorize and fund the National AI Research Resource (NAIRR). The NAIRR is important to foster the development of the U.S. domestic AI research ecosystem and maintain U.S. leadership in AI on the global stage.
- Most of the world’s leading AI developers are outside of government institutions. Governments need to engage these experts by utilizing public-private partnerships to inform the development of regulation and guidance, build modern government AI systems, and incorporate AI efficiencies into government services.
- Government agencies need dedicated funding sources for AI deployment and governance.
- Support the creation of a dual-intent science, technology, engineering, and math (STEM) visa for foreign students who have earned a master’s-level or higher degree from U.S. colleges and universities. This would promote economic growth and innovation in AI by ensuring that talented innovators educated and trained in the United States can become citizens and create jobs here.
- Support the federal government’s strategic hiring of AI experts and the filling of vacant technology roles. Bolstering our federal workforce with needed talent will allow key government agencies to enhance their capacity to monitor, utilize, and ensure responsible and impactful AI development and deployment.
- TechNet supports expanded government utilization of AI to improve access to important services, enhance efficiency, achieve cost savings, support data-driven decision-making, and provide more equitable and inclusive services, ultimately benefiting citizens and society as a whole.
- TechNet supports the government in developing “AI Ready Data.” The United States federal government is one of the biggest producers of data in the world, and these important datasets are already fueling innovation in the public and private sectors. As we move to greater deployment of AI systems, ensuring this data is well-organized will allow these modern tools to deliver faster, cost-effective, and more accurate insights.