Artificial Intelligence, Finance, and the Law

The progress and promise of artificial intelligence in finance have been remarkable. It has made finance cheaper, faster, larger, more accessible, more profitable, and more efficient in many ways. Yet for all the significant progress and promise it has made possible, financial artificial intelligence also presents serious risks and limitations.

My recent article, Artificial Intelligence, Finance, and the Law, in the Fordham Law Review, offers a study of those risks and limitations: the ways artificial intelligence, and misunderstandings of it, can harm and hinder law, finance, and society. It provides a broad examination of the inherent and structural risks and limitations present in financial artificial intelligence, explains the implications of those dangers, and offers some recommendations for the road ahead. More specifically, four categories of risks and limitations are particularly noteworthy in connection with financial artificial intelligence: (1) coding limitations, (2) data bias, (3) virtual threats, and (4) systemic risks. Individually and collectively, these four perils loom large as potential inherent and structural dangers of financial artificial intelligence.

First, artificial intelligence programs are limited by their underlying code and by their ability to fully and properly capture all that is happening in the marketplace. There are simply too many complex, ineffable human and other elements of financial markets and our uncertain world that cannot be fully or properly captured by artificial lines of code, no matter how comprehensive or smart. As such, computer code and models frequently rest on oversimplified assumptions about the workings of the marketplace, which makes them appear more predictive and productive than they actually are. To be sure, financial artificial intelligence tools have the capacity to make powerful predictions and produce incredible value that move and grow markets. Because of their limitations, however, these programs and models also operate with potentially dangerous blind spots about the marketplace. Therefore, as we grow more reliant on and assured of the promise of financial artificial intelligence, we should also grow more mindful of its limited capacity to fully comprehend the ineffable complexities of a still largely human-driven marketplace.

Second, discriminatory data and algorithmic bias represent a set of critical risks and limitations associated with financial artificial intelligence. Most artificial intelligence initially needs large quantities of data to teach its programs to recognize certain patterns and make certain predictions. At its best, artificial intelligence can uncover valuable new insights and observations from troves of big data that would otherwise be impossible to obtain without its awesome processing powers. At its worst, artificial intelligence can exacerbate misguided old practices and aggravate past social harms with those same processing powers and a veneer of novel objectivity, since no visibly discriminatory humans appear to be associated with the decisions. While one should appreciate the incredible potential of financial artificial intelligence, one should also be cognizant of the risks inherent in systems built on data that may reflect harmful past biases against the marginalized and the poor. As a society, we do not want these prejudices replicated in the present or perpetuated in the future. One should be particularly mindful that the underlying data, contexts, and applications are being selected and coded by flawed humans influenced by all of our biases, prejudices, and fallacies.

Third, another key category of risks associated with financial artificial intelligence involves the rise of virtual threats and cyber conflicts in the financial system. The emergence of financial artificial intelligence reflects the industry's growing reliance on financial data and information technology, and this burgeoning reliance has made the financial industry ever more vulnerable to virtual threats. A recent industry report found that the financial industry remains the top target for cybercriminals. As the financial industry continues to evolve into a high-tech industry, it will surely face even more of the same types of cyber challenges confronted by most traditional technology companies. These new vulnerabilities and threats will require greater cybersecurity coordination and effort within individual firms and across the industry writ large. A recent survey of financial firms and their law firms suggests that the financial industry can do meaningfully better at organizing and executing on its cybersecurity, technology, and compliance resources and efforts to combat cybersecurity threats.

Fourth, the rise of financial artificial intelligence and related financial technology heightens the dangers of systemic risk and major financial accidents. A growing reliance on artificial intelligence and other forms of technology in the financial industry can exacerbate intertwined systemic risks related to size, speed, and interconnectivity. Risks associated with institutions being too big, too fast, or too interconnected will implicate issues concerning not only their balance sheets but also their data and technology systems. Moreover, the growing complexity of technology increases the risks of serious financial accidents, whereby inevitable technological glitches could lead to financial chaos and catastrophe. As such, while we should appreciate the many new positive outgrowths of financial artificial intelligence for certain firms and institutions, we should also be mindful of the hazards and challenges that it may cause for the entire financial system.

In response to these four major categories of risks and limitations, as detailed in the previously referenced article, public policymakers and private stakeholders in the financial industry should take near-term action in three areas: financial cybersecurity, competition, and societal impact. In particular, policymakers and stakeholders should be more proactive in enhancing financial cybersecurity, promoting competition in the financial sector, and safeguarding the people-centered social purposes of finance. Through a thoughtful combination of public policy and private action, businesses and society can work together in a coordinated fashion to harness the benefits of financial artificial intelligence, while mitigating its harmful effects for individuals, local communities, and the greater economy.

Ultimately, the rise and growth of artificial intelligence in finance and beyond will likely be one of the most significant developments for law, finance, and society in the coming years and decades. Its early movements already offer glimpses of the awesome powers and potential of financial artificial intelligence. Nevertheless, as financial artificial intelligence continues to grow and evolve, we must also become more aware of its potential risks and limitations. We must grow more cognizant of the ways it can harm and hinder individual as well as societal progress, as we try to build better and smarter financial artificial intelligence: one that is less artificial, more intelligent, and ultimately more humane and more human.

A version of this article was previously featured in the Columbia Law School Blue Sky Blog. The full paper is available for download here.
