Evangelos Razis
Former Director, Center for Global Regulatory Cooperation

Published June 02, 2021

[This is the fifth article in a series on policy priorities for transatlantic relations. Read articles one, two, three, and four.]

In April, the European Union took its first steps toward building a comprehensive new framework for regulating artificial intelligence (AI). Drafted by the European Commission, the Artificial Intelligence Act bans certain AI practices outright and mandates that AI applications deemed “high risk” meet strict data governance and risk management requirements.

The bill may be an inflection point in Europe’s digital future.

In the United States, policymakers are rightly focused on boosting America’s competitiveness by supporting the development and use of AI. In proposing the AI Act, European leaders seem to believe that their capacity and willingness to regulate are a competitive advantage over more innovative economies.

This is a high-stakes gamble.

Certainly, American and European businesses developing emerging technologies benefit from clear rules of the road. Trust in AI is also an important factor in how businesses brand their products and services and retain customer loyalty. Strict and complicated rules, however, will only stifle Europe’s digital transformation and investment, undermining its long-term economic relevance. While we are still in the early days of the EU’s legislative process, it is worth examining a few of the assumptions underlying Europe’s big bet.

Assumption #1: New Regulation Will Help, Not Hinder Europe’s Competitiveness

The AI Act would impose a long list of obligations on AI products and services deemed “high risk.” These include requirements for testing, training, and validating algorithms, ensuring human oversight, and meeting standards of accuracy, robustness, and cybersecurity. Businesses would need to prove that their AI systems conform to these requirements before placing them on the European market. The operating assumption here is that new regulation will foster trust in AI and, by extension, Europe’s competitiveness.

Yet the cost of meeting the proposed requirements is staggering. According to one study sponsored by the European Commission, businesses would need as much as $400,000 upfront just to set up a “quality management system.” Few startups or small and medium-sized businesses can pay this price of admission into the AI marketplace, let alone the additional costs of ongoing compliance. If the EU wishes to stay in the global AI race, AI cannot be the preserve of businesses with vast legal and engineering budgets. As the General Data Protection Regulation (GDPR) demonstrated, heavy-handed laws can have the unintended consequence of inhibiting the next generation of European digital players. All in all, the AI Act would eat up as much as 17% of AI investment in Europe. One wonders whether that money would be better spent bringing innovative products and services to market.

Assumption #2: Europe Should Be the World’s Leading AI Regulator

The AI Act would govern artificial intelligence well beyond Europe’s borders. Given the bill’s broad territorial scope, companies outside of Europe that feed into complex software supply chains and business relationships may find themselves subject to European law. Observers have rightly mused that the EU’s goal of regulating “statistical approaches [and] Bayesian estimation” would entangle activities as mundane as a high-school statistics class. The draft legislation would also grant regulators new authority to fine violators up to 6% of their global annual turnover, regardless of where that revenue is generated. Businesses that find themselves covered by the future law may also have to comply with EU-specific technical standards: the proposed measure would grant the European Commission authority to unilaterally adopt new ones wherever it finds existing standards insufficient. This contrasts with the multi-stakeholder, voluntary approach to standards development long championed by the U.S. and the global business community.

Assumption #3: Handing Over Proprietary Data, Source Code, and Algorithms to Regulators Is a Good Idea

Under the AI Act, European regulators would have the authority to demand access to businesses’ data, source code, and algorithms. While there may be precedent for this practice in certain limited circumstances, the Act would create a broad regulatory authority without important safeguards. At best, this would expose valuable intellectual property and trade secrets to cyberattack. As the recent European Medicines Agency hack showed, regulators are prime targets for cyber criminals seeking the crown jewels of cutting-edge technology. At worst, it is indicative of a broader trend in Europe that devalues companies’ investments in data and data-driven innovations. Under the Digital Markets Act, for example, so-called “gatekeepers” (read: American companies) would be required to share their data and algorithms with their European competitors.

Assumption #4: Europe Needs More Regulators and Regulation

With the AI Act, the European Commission seems to be embracing regulatory complexity. Under the current proposal, national governments can designate a dizzying array of “supervisory authorities,” “notifying authorities,” and “market surveillance authorities.” These bodies would be under no obligation to coordinate how they interpret and enforce Europe’s new AI rules across 27 member states. AI governance frameworks should recognize the diversity of AI applications and, wherever possible, leverage existing rules and regulators. But erecting a new maze of institutions on top of existing laws that already govern different aspects of AI will serve only to slow the ability of businesses to develop and use AI products and services in Europe.

Three years after its enactment, GDPR is a case study in the EU’s regulatory fragmentation. Core elements of the law are applied differently by regulators across Europe’s member states. GDPR’s great promise—that enterprises would only need to interact with one data protection authority when doing business across the Single Market—is unfulfilled. In practice, this “one-stop-shop” mechanism has been narrowed and undermined by regulators competing with one another for jurisdiction to fine large companies. As drafted, the AI Act reflects none of these lessons.

The Commission’s regulatory gamble should serve as a wake-up call to Washington that global rules for AI are being written elsewhere. If the U.S. does not provide the world with a compelling alternative—namely, a light-touch framework that promotes public trust and enables innovation—then foreign countries may follow the EU’s lead, incorporating many of the assumptions above. As we have seen in privacy and data protection policy since GDPR’s enactment, this will have significant implications for the ability of U.S. businesses to trade with the rest of the world, often to the detriment of American workers and exporters.

Fully implementing the bipartisan-backed Guidance for Regulation of Artificial Intelligence Applications is an important first step, as is supporting work by the National Institute of Standards and Technology to develop an AI risk management framework that advances trustworthy AI. But we must quicken the pace. Europe’s big bet may rest on questionable assumptions, but that is no excuse for U.S. policymakers to stay on the sidelines.

About the author

Evangelos Razis

Evangelos Razis is former Director at the U.S. Chamber of Commerce’s Center for Global Regulatory Cooperation.