EU AI Paper Final

Published November 30, 2020


The ILR’s research paper, The Future of AI Liability in the EU, examines AI’s potential to advance innovation while acknowledging legitimate safety concerns in the European Union.

By definition, machine learning and autonomous decision-making reduce human control and, for that reason, raise a myriad of ethical, legal, and practical concerns. As the EU comes to terms with these concerns, it is rightly asking:

  • What role should the EU play in encouraging or limiting the ways in which AI may develop?
  • Does the EU intend to lead or follow?
  • How should a desire to promote bold innovation be balanced against the need to protect consumers?
  • And how well suited are traditional concepts of liability when machines, not people, make the decisions?

The potential impact of EU legislation on AI is vast. Legislative measures will likely affect broad swathes of consumers and industry, including any business active in the AI space that sells its products or services to EU customers.

This ILR research paper breaks down the various elements of the Commission’s emerging position on AI and offers a set of guiding principles for the development of a balanced, flexible, and future-oriented AI liability regime founded on concrete evidence and a careful cost/benefit analysis. These principles include:

  • Taking stock of existing EU measures and industry best practices.
  • Favoring “soft” measures, not overzealous regulation.
  • Consulting with stakeholders and adopting a participative regulatory approach.
  • Coordinating with other institutions and governments.
  • Promoting evidence- and risk-based regulations.
  • Adopting reasonable constraints on liability.
