Artificial intelligence (AI) and machine learning (ML) are transformational technologies that are revolutionizing how we live and work and helping us solve the greatest challenges of our time. AI and ML can enhance productivity, democratize and expand access to important services, and drive product innovation. They are being used to defend our country against cyberattacks, detect and deter fraud, deliver high-quality health care, assist persons with disabilities, help individuals make better financial decisions, and train workers, among other applications. AI and ML development should be embraced because their potential for improving our lives is almost limitless.
However, AI innovation must be developed and implemented responsibly. Doing so requires attention to key issues: privacy, transparency, data veracity, security, and the workforce. We are concerned, however, about a patchwork of well-intentioned but ill-advised state and city regulations that could effectively limit the use of AI and reduce the opportunity for those in their respective jurisdictions to enjoy its benefits.
Responsible and Ethical AI and ML:
- Govern: AI and ML must be created within a framework that is anchored to our shared core values, ethical guardrails, and regulatory constraints, in addition to an organization’s operating principles.
- Provide clear definitions for key terms like “artificial intelligence,” “machine learning,” “automated decisioning,” “artificial intelligence techniques,” and “algorithms” that avoid overly broad designations leading to uncertainty about who and what are affected.
- Avoid blanket prohibitions on AI, ML, or other forms of automated decision-making. Any restrictions should focus on high-risk uses and on decisions made solely through automated means.
- Utilize the National Artificial Intelligence Research Resource (NAIRR) within the White House Office of Science and Technology Policy and the National Science Foundation to partner with industry and stakeholders to lower the barrier to entry for AI research and help spur greater economic prosperity.
- Support voluntary government guidelines that establish consensus standards and outline a comprehensive approach to AI by ensuring public engagement, limiting regulatory scope, and promoting trustworthiness in technology. Monitor the development and implementation of the NIST Artificial Intelligence Risk Management Framework, a voluntary tool for incorporating trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.
- Contemplate “ethical risk frameworks” to articulate voluntary ethical standards and guardrails.
- TechNet supports strong regulations that preserve companies’ ability to safeguard the integrity of their systems.
- Federal regulations should not force companies to provide proprietary or protected information. Enforcement should be limited to the relevant agencies and avoid private rights of action.
- As the United States develops artificial intelligence policy, TechNet encourages policymakers to be mindful of global AI regulations.
- The development of effective and responsible AI requires large, diverse datasets. TechNet supports a responsible federal privacy policy that provides uniform protections for Americans regardless of where they live. Federal policies need to ensure sensitive data can be used for self-testing to verify that algorithms work inclusively and as intended. TechNet’s principles on privacy can be found here.
- Policymakers should work with all stakeholders, including small businesses, when considering AI regulation to ensure proposals do not inadvertently consolidate the benefits of AI to the largest companies.
- Design: AI should adhere to “responsible AI by design” architecture and be deployed with trust and other safeguards built into the design. This means making reasonable efforts to incorporate representative, high-quality data while accounting for privacy, transparency, security, and, as appropriate, interpretability.
- Utilize risk-based impact assessments for AI and ML initiatives that assess risks and benefits of each model, as applied.
- Operationalize “ethical risk frameworks” in the design phase to implement appropriate oversight and governance.
- During the design phase, account for diverse backgrounds, expertise, and lived experiences.
- Monitor: Throughout its lifecycle, AI must reflect human values, and its performance must be appropriately monitored and evaluated. Measures to prevent unintended bias and discrimination should be implemented. Owners of AI systems should ensure appropriate oversight and accountability to enable humans to assess the need for improvements to ensure safety, fairness, and trustworthiness; protect against malicious activity; and address flawed datasets or assumptions. It is worth underscoring, however, that existing anti-discrimination laws already apply to AI models in many important contexts, including housing, employment, and consumer financial services (i.e., the Fair Housing Act, Title VII of the Civil Rights Act of 1964, and the Equal Credit Opportunity Act). Additional oversight in these areas would therefore be unnecessarily duplicative and could create inconsistent or conflicting standards. Instead, policymakers should leverage existing tools to address concerns about bias.
- Train: Support active upskilling through education and training and human-computer symbiosis, helping move employees from executing rote tasks to providing analysis that requires judgment, ingenuity, and real-world understanding.
- Encourage the promotion and growth of training and workforce development to prepare employees for roles requiring human-AI collaboration. This includes using techniques and practices to identify skills gaps and promote the learning needed to succeed in AI.
- Workforce development should take into account the potential need for retraining to support current and future workers.
- To preserve our national security, the Department of State and Department of Commerce should consider how to best implement export controls for military and dual-use AI, respectively. In most cases, AI should be considered an aspect of the defense article itself. To the extent the algorithms themselves warrant control to protect national security, the limits should be narrowly defined.