Evangelos Razis
Former Director, Center for Global Regulatory Cooperation

Published June 18, 2020


Governments around the world are increasingly turning their attention to artificial intelligence. Earlier this year, the White House issued a draft memorandum on Guidance for Regulation of AI Applications, which outlined an approach that the U.S. Chamber strongly endorses. Across the Atlantic, the new European Commission followed suit by publishing an AI white paper, which proposes a regime for regulating AI deemed "high risk" and invites stakeholders to comment.

Our response to the European Commission, which may be found here, emphasizes the following points:

A Stable Regulatory Environment & Continued Investment are Essential

AI is a diverse and rapidly evolving suite of technologies that continues to scale across sectors. A stable regulatory environment and continued investment from governments and the business community are therefore essential. As the Commission looks to promote Europe's post-pandemic economic recovery and boost its competitiveness, it should prioritize AI investments and stronger incentives for data-driven innovation over establishing an ambitious framework for "high risk" AI applications.

Keep Europe’s Economy Open

Europe's efforts to advance the use and development of AI must not wall it off from the rest of the world. The Commission's goals of accelerating Europe's digital transformation, building the digital skills of its workforce, and preparing its industrial base for a data-driven future are important ones. At the same time, rhetoric around "technological sovereignty" is concerning. The Commission should explicitly disavow approaches to AI governance that may inhibit market access or disadvantage non-European providers of AI technologies and applications. We echoed these points in our recent comments on the Commission's European Data Strategy.

Gather More Evidence

The Commission's white paper asserts that "…lack of trust is a main factor holding back a broader uptake of AI" and that a "…clear European regulatory framework would build trust among consumers and businesses in AI." It fails, however, to provide sufficient evidence for these claims. Consumers interact with businesses using AI at scale every day, suggesting that they are more comfortable with AI than they may realize. It is also unclear how a gap in trust would be addressed through new regulatory action, as opposed to the many multi-stakeholder initiatives that have formed around the world – including by the Commission – to promote responsible uses of AI.

Review Existing Laws & Regulations

Many of Europe’s existing laws and regulations already apply to AI, including in financial services, healthcare, transportation, safety and security, and data protection. Before moving forward with new rules, the Commission should undertake a thorough and comprehensive review of all relevant existing EU and Member State laws and regulations. Failure to appropriately account for these rules before instituting a new framework may lead to overlapping and contradictory obligations that will reduce Europe’s economic competitiveness.

An Improved Risk-Based Approach

The Commission’s risk-based approach to AI should be proportional, incorporate factors such as the probability and scale of potential harm, and account for the significant social, safety, and economic benefits that may accrue when an AI replaces a human action. Categorizing entire sectors and applications as “high-risk,” as the Commission does in its proposal, is an insufficiently nuanced approach because an AI’s risk profile necessarily varies from case to case and from business to business. Moreover, any future framework should integrate the EU’s existing regulatory structures as much as possible. Regulators — whether in financial services, healthcare, transportation, data protection, or safety and security — are best placed to interpret and apply a risk framework to their specific contexts.

Avoid Burdensome Requirements

The Commission's proposal to subject AI designated as "high risk" to a new conformity assessment regime could create a significant bottleneck in the development of AI in Europe, as companies would need to win approval from regulators before placing AI-enabled goods and services on the market. Many innovative small and medium-sized enterprises that have neither the time nor the resources to undergo such a process will either avoid investing in perceived "high risk" areas or deploy their solutions abroad. The regime may also raise significant trade and intellectual property concerns, as companies will be reluctant to allow an outside organization or government agency to inspect the algorithms and datasets used in an AI's development.

International Cooperation

Cooperation between the EU, the U.S., and like-minded countries such as Japan is necessary to advance interoperability between emerging AI governance frameworks. It is also needed to face the common challenge posed by non-market economies that exploit illegal state subsidies, rely on forced technology transfers, and undermine fundamental human rights. The Commission has played an important role, alongside the U.S. and Japan, in developing the OECD's Recommendations on Artificial Intelligence and in their subsequent endorsement by the G20. We urge the Commission to continue to engage the U.S. business community and other international partners on this vital issue.

To learn more about the U.S. Chamber’s position on artificial intelligence, review our policy principles.

About the author

Evangelos Razis

Evangelos Razis is former Director at the U.S. Chamber of Commerce’s Center for Global Regulatory Cooperation.