
Artificial Intelligence Commission 2023 - Executive Summary

Published March 9, 2023

The use of artificial intelligence (AI) is expanding rapidly. These technological breakthroughs present both opportunity and potential peril. AI technology offers great hope for increasing economic opportunity, boosting incomes, speeding life science research at reduced costs, and simplifying the lives of consumers. With so much potential for innovation, organizations are already ramping up AI investments and productivity-boosting initiatives to remain competitive.

Like most disruptive technologies, these investments can both create and displace jobs. If appropriate and reasonable protections are not put in place, AI could adversely affect privacy and personal liberties or promote bias. Policymakers must debate and resolve the questions emanating from these opportunities and concerns to ensure that AI is used responsibly and ethically.

This debate must answer several core questions: What is the government’s role in promoting the kinds of innovation that allow for learning and adaptation while leveraging core strengths of the American economy in innovation and product development? How might policymakers balance competing interests associated with AI—those of economic, societal, and quality-of-life improvements—against privacy concerns, workforce disruption, and built-in biases associated with algorithmic decision-making? And how can Washington establish a policy and regulatory environment that will help ensure continued U.S. global AI leadership while navigating its own course between increasing regulations from Europe and competition from China’s broad-based adoption of AI?


The United States faces stiff competition from China in AI development. This competition is so fierce that it is unclear which nation will emerge as the global leader, raising significant security concerns for the United States and its allies. Another critical factor shaping the path forward in AI policymaking is how nations have historically weighed important values, such as personal liberty, free speech, and privacy.

To maintain its competitive advantage, the United States and like-minded jurisdictions, such as the European Union, need to reach agreement on resolving the key legal challenges that currently impede industry growth. At this time, it is unclear whether these important allies will collaborate on establishing a common set of rules to address these legal issues or whether a more competitive—and potentially damaging—legal environment will emerge internationally.

AI has the capacity to transform our economy, how individuals live and work, and how nations interact with each other. Managing the potential negative impacts of this transition should be at the center of public policy. There is a growing sense that we have a short window of opportunity to address key risks while maximizing the enormous potential benefits of AI.

The time to address these issues is now.

In 2022, the U.S. Chamber of Commerce formed the Commission on AI Competitiveness, Inclusion, and Innovation (“Commission”) to answer the questions central to this debate. The Commission, co-chaired by former representatives John Delaney (D-MD) and Mike Ferguson (R-NJ), was tasked with providing independent, bipartisan recommendations to aid policymakers. Over the course of a year, Commissioners heard from over 87 expert witnesses during five separate field hearings across the country and overseas, and received written feedback from stakeholders in response to three separate requests for information posed by the Commission.


The Commission identified six major themes from its fact-finding:

Key takeaways

  • The development of AI and the introduction of AI-based systems are growing exponentially. Over the next 10 to 20 years, virtually every business and government agency will use AI. This will have a profound impact on society, the economy, and national security.
  • Policy leaders must undertake initiatives to develop thoughtful laws and rules for the development of responsible AI and its ethical deployment.
  • A failure to regulate AI will harm the economy, potentially diminish individual rights, and constrain the development and introduction of beneficial technologies.
  • The United States, through its technological advantages, well-developed system of individual rights, advanced legal system, and interlocking alliances with democracies, is uniquely situated to lead this effort.
  • The United States needs to act to ensure future economic growth, provide for a competitive workforce, maintain a competitive position in a global economy, and provide for our future national security needs.
  • Policies to promote responsible AI must be a top priority for this and future administrations and Congresses.

In light of these findings, the Commission also determined that the following five pillars should be at the core of AI regulatory policymaking:

Five pillars of AI regulation

Efficiency

Policymakers must evaluate the applicability of existing laws and regulations. Appropriate enforcement of existing laws and regulations provides regulatory certainty and guidance to stakeholders and would help inform policymakers in developing future laws and regulations. Moreover, lawmakers should focus on filling gaps in existing regulations to accommodate new challenges created by AI usage.

Collegiality

Federal interagency collaboration is vital to developing cohesive regulation of AI across the government. AI use is cross-cutting, complex, and rapidly changing and will require a strategic and coordinated approach among agencies. Therefore, the government will need to draw on expertise from different agencies, allowing sector and agency experts to focus on the most important emerging issues in their respective areas.

Neutrality

Laws should be technology neutral and focus on applications and outcomes of AI, not the technologies themselves. Laws regarding AI should be created only as necessary to fill gaps in existing law, protect citizens’ rights, and foster public trust. Rather than trying to develop a one-size-fits-all regulatory framework, this approach to AI regulation allows for the development of flexible, industry-specific guidance and best practices.

Flexibility

Laws and regulations should encourage private sector approaches to risk assessment and innovation. Policymakers should encourage soft-law and best-practice approaches developed collaboratively by the private sector, technical experts, civil society, and the government. Such non-binding, self-regulatory approaches provide the flexibility to keep pace with rapidly changing technology, whereas prescriptive laws risk becoming outdated quickly.

Proportionality

When policymakers determine that existing laws have gaps, they should adopt a risk-based approach to AI regulation. This model ensures a balanced and proportionate approach to creating an overall regulatory framework for AI.


Recommendations

Recognizing the urgency of developing policies that promote responsible AI and ensure economic and workforce growth, the Commission used these pillars to develop policy recommendations that put these priorities into action. The Commission identifies areas that policymakers must address, including preparing the workforce through education, bolstering global competitiveness in intellectual property while shoring up partnerships, and protecting national security.

Preparing the Workforce

  • Use an Evidence-Based Approach. Policymakers must leverage new data sources and advanced analytics to understand the evolving impact of AI and machine learning on the American workforce and the broader public.
  • Educate the Future Workforce. The United States must increase education around AI in both the K-12 and higher education systems by encouraging policymakers to reform standard curricula to better prepare students to develop AI and machine learning systems.
  • Train and Reskill. The public and private sectors must invest in training and reskilling the future workforce. These investments should be targeted toward programs that help ease worker transitions and improve incentives for businesses to invest in retraining. Policymakers should also leverage community colleges and vocational schools to train workers to perform jobs alongside AI-enabled systems.
  • Attract High-Skilled Talent. In areas where a worker shortage cannot be addressed through education, training, and reskilling, Congress must act to increase the AI talent pool through targeted refinements to the H-1B visa process to encourage high-skilled immigration to the United States.

Bolstering Global Competitiveness

  • Shore Up Global Partnerships. U.S. officials must collaborate with key partners and allies to develop more sensible global governance frameworks that advance our common democratic goals and values.
  • Advance Intellectual Property Protections. Building on the foundation of the current system, policymakers must clarify intellectual property law requirements to ensure adequate protection of AI-enabled intellectual property. Before any change, policymakers must involve relevant stakeholders to consider potential unintended effects.
  • Provide Necessary Resources. Policymakers should provide additional resources to the U.S. Patent and Trademark Office to support the acquisition of technical expertise, training, and other resources to speed the review of AI- and machine learning–related public patent applications.
  • Protect Ingenuity. Policymakers should also explore opportunities to grant provisional approvals for submissions under review where appropriate to mitigate the effects of lengthy delays.

Protecting National Security

  • Human Rights. The United States must drive the development and implementation of laws and codes of conduct focused on promoting human rights and innovation.
  • Establish International Rules of Conduct. As the United States leads in the development of AI-enabled weapons, it should follow and encourage other countries to align with existing international norms and laws.
  • Systems Validation. The United States should invest heavily in new ways of testing, evaluating, verifying, and validating (“TEVV”) military AI and machine learning systems to ensure that they are used safely.
  • Streamline Procurement. To capitalize on American ingenuity, Congress and the Pentagon must look at streamlining acquisition processes and finding new ways of incorporating industry expertise and experience within the military enterprise.
  • Work with Allies. The United States should look to open investment opportunities for AI-enabled systems to like-minded countries and allies and vice versa.

These findings and recommendations are not exhaustive, and we welcome the insights of others who may contribute to the AI policy debate. The Commission and individual Commissioners stand ready to collaborate with policymakers to address these issues, which are of utmost importance to the United States and to the economic well-being and safety of the global community.
