Michael Richards
Director, Policy, U.S. Chamber of Commerce Technology Engagement Center (C_TEC)

Published June 24, 2022

When regulating artificial intelligence (AI) in financial services and in the context of America’s global competitiveness, we must be wary of the risks while emphatically promoting democratic values, said experts testifying at the U.S. Chamber of Commerce’s AI Commission field hearing in London. 

Importance of AI in financial services 

“Financial services need AI…There’s lots and lots of legacy tech and manual data processes,” testified Rupak Ghose, Chief Operating Officer at Galytix, an AI-driven FinTech firm.  

Before fully embracing AI, though, Ghose emphasized the need to examine the potential impact of bad actors and the interplay between different AI models. AI bots, for instance, have the scale and influence to move markets with a single tweet.

Ghose added, “Rules are only as good as the cops we have that implement those rules...the question is, do you have the right people in place in the private sector and government to police this?” 

Regulating AI 

According to Philip Lockwood, Deputy Head of Innovation at NATO, the primary driver behind innovation and cutting-edge technology has shifted from the government and defense industry to the private sector. 

“If you look at the list of technologies on our [emerging and disruptive technologies] list, AI, quantum, autonomy, biotech, human enhancement, these sorts of things, the vast majority of the spend on this is actually coming from the private sector.” The defense and security uses of AI are therefore inextricably tied to commercial uses. The EU’s current draft AI regulation exempts defense, security, and military uses from its scope. However, “if most AI development is really being driven for commercial purposes, most of the AI actually that we’re interested in at a fundamental level is actually in scope of the regulation. And so, it has a very significant impact [on our work].”

On the regulation of AI, Kenneth Cukier, Deputy Executive Editor at The Economist and host of its Babbage podcast, drew a distinction between input privacy and output privacy.

“The input privacy is the data that goes into the model, and the output privacy is how the data is used...Often, in privacy law, we’re regulating the collection of the data, because it’s easier...but on use, it’s a little bit trickier,” said Cukier. To illustrate the difference, he pointed to photographs that people upload to social media, a practice we would want to preserve. But if a platform uses those photographs in ways we aren’t comfortable with, such as in law enforcement, then it is the output privacy, the use of the data, that we would want to regulate.

AI’s impact on society 

“Most technologies for the last several centuries have been a democratizing force...The problem with AI is that it seems at least so far today to be very hierarchical and not democratizing,” Cukier said. “It requires increasing levels of scale and resources to be extremely good at it…those companies that have adopted AI are outperforming others at 10 to 20 times the baseline in their industry.”  

But the answer is not to pull down the winners. “We should let the winners flourish, but help people, not the firms. I think public policy should focus on that,” he added. 

Carissa Véliz, Associate Professor in the Faculty of Philosophy and the Institute for Ethics in AI, and Tutorial Fellow at the University of Oxford, also highlighted how AI may affect people.

“The way we’re deploying AI is changing the distribution of risk in society in problematic ways, especially in the financial sector,” she said. Referencing the 2008 financial crisis, when responsibility for risk shifted from banks to individuals, Véliz cautioned, “There was a disconnect between the people that made the risky decisions and the people who were going to pay the price when things went wrong… And I think we might be facing a similar kind of risk in which we use an AI to minimize risk for an institution…but it's actually just pushing risk on the shoulders of individuals.”

Global competition for AI influence  

Witnesses emphasized the differing values-based approaches of Western countries and authoritarian regimes such as China and Russia.

“We’re going to have spheres of influence on AI, similar to how we’ve had in international relations,” Cukier stated. “We’re going to have a Western flavor of AI based on Western values – it’s going to make the balance between America and Europe over GDPR seem like a small trifle because there’s so much more that brings us together than separates us – versus the authoritarian countries, China, Russia, many others, and their flavor of AI.”  

Cukier added that this battle for influence will play out in regions such as Latin America, Asia, and Africa: “So the stakes are really high. And I think the Chamber of Commerce has a great role to ensure that this cluster of values is part of the AI conversation.”

Is the U.S. falling behind China? 

Some speakers discussed a widening gap between the U.S. and China. “In financial services, I think more than any other industry, China is ahead on AI,” noted Ghose. “They are way ahead in terms of mass consumption of AI in the financial services sector.” 

“China is actually outpacing the U.S. in terms of STEM PhD growth,” said Nathan Benaich, Founder and General Partner at Air Street Capital, a venture capital firm investing in AI-first technology and life science companies. “They’re actually projected to reach double the number of STEM PhD students by 2025. Meanwhile, in the Western world, you see numerous examples of depleting STEM budgets, and that’s driving this exodus in the industry.”

Exporting democratic values 

In comparing our own progress with China’s, however, the aim should not be to emulate or compete against its model, stressed Véliz.

“Instead of moving away from a system like China’s techno-authoritarian style, we're actually trying to compete with them. And I think that this is a mistake,” she said. “This is a time to defend our liberal values and for democracies of the world to come together...Given that China is exporting surveillance, our job as a liberal democracy is to export privacy.” 

Lockwood echoed this point, “We believe that accelerating responsible innovation is critical to ensure that we’re building trust and accountability in these areas, and that’s on the basis of our shared democratic principles…We have to be able to demonstrate that we are taking concrete steps and actions to be able to bridge that gap and to demonstrate that we are different, in fact, from other adversaries and competitors in this space.” 

What’s next? 

To explore critical issues around AI, the U.S. Chamber AI Commission is hosting a series of field hearings in the U.S. and abroad to hear from experts on a range of topics. Past hearings took place in Austin, TX; Cleveland, OH; Palo Alto, CA; and London, UK. The final field hearing will take place in Washington, DC, on July 21, and will focus on national security and intellectual property as they relate to artificial intelligence.

Learn more about the AI Commission here
