Public Statements & Remarks

Manuel F. Cohen Endowed Lecture by Commissioner Kristin Johnson at George Washington University Law School

Artificial Intelligence and The Future of Finance

October 12, 2023

Thank you, Dean Dayna Matthew and Professor Omari Simmons, for inviting me to deliver the 2023 Manuel F. Cohen Lecture.

Born in Brooklyn at the turn of the century, Manuel Frederick Cohen (“Manny”) was a dedicated public servant who served for almost three decades at the Securities and Exchange Commission (SEC). In 1961, Manny was nominated by President Kennedy to serve as a Commissioner of the SEC and, three years later, President Johnson nominated Manny to serve as Chairman of the SEC.[1]

During his time as Chair, Manny Cohen led critical market reforms including reforms that targeted unfair practices that leveraged asymmetries of information such as insider trading. Championing investor protection initiatives, Cohen worked to construct effective checks and balances in trading markets and to ensure fair and equal access to markets.

This year we will celebrate the 90th anniversary of the SEC and the 50th anniversary of the Commodity Futures Trading Commission (CFTC), where I currently and proudly serve as a Commissioner.

In many ways, we have closed the chapter on the era of Chairman Cohen’s service. Long gone is the operational infrastructure characterized by the open outcry method of trading—defined by floor brokers shouting in colorful and curious language and wildly gesticulating in trading pits. While this era remains popular for Hollywood films like Trading Places or The Wolf of Wall Street, trading markets today are complex, diverse, and deeply influenced by the evolution of technology.

The introduction of the internet, rapidly advancing electronic and digital trading technologies, cloud-based servers warehousing transaction data, predictive or algorithmic trading strategies, and the potential of quantum computing have all rapidly and radically altered the operational infrastructure and governance of trading markets.

My remarks today, which reflect only my views, advance critical questions regarding the development, adoption, integration, and regulation of artificial intelligence (AI). More specifically, my remarks will focus on the integration of AI in rapidly evolving financial markets.

AI Technologies in the World of Finance

The promise and the potential pitfalls of generative AI technologies have engendered an international discourse that presents staggering questions for market regulators tasked with ensuring the integrity of markets and protecting the most vulnerable customers.

My hope is that you will leave this afternoon with a stronger understanding of the attributes of the technologies commonly described as AI, a deeper appreciation for the benefits and limits of AI, and a sense of urgency with respect to the need to create and implement well-tailored regulation that guides—in real-time—decisions by market participants or federal administrative agencies to adopt, implement, or incorporate these technologies in their decision-making processes.

“AI” refers to the ability of machines or computer-controlled robots to perform tasks that are commonly associated with intelligent (human) beings. These tasks include reasoning, discovering meaning, generalizing, and learning from past experiences.

The Need for Regulatory Guardrails: Understanding and Integrating AI in Financial Markets

In two essays, published in the Fordham Law Review and the Journal of International and Comparative Law, respectively, I have raised alarms regarding the need for regulatory guardrails as markets explore and consider integrating supervised and unsupervised machine learning algorithms or generative AI.

In these early works, I explained that “[e]merging technologies promise to play a transformative role in our society, enabling driverless cars, enhanced accuracy and efficiency in disease mapping, and purportedly greater and less expensive access to certain consumer services, including consumer financial services.”[2] Yet, with each new use case, we must consider previously identified as well as emerging risks engendered by the integration of the technology.

AI describes the assemblage of technologies that enable machines to learn to execute these tasks. As my co-author and I explain in one of the essays:

[a]t the most general level, the term AI refers to “a set of techniques aimed at approximating some aspect of human or animal cognition” relying on a system of algorithms to simulate human learning and a machine to execute the correlated activity. Aside from this general sketch, a generally agreed-upon definition of AI remains elusive. Instead, the term AI refers to a large set of information or computer sciences[.]

As Professor Harry Surden points out, “AI is truly an interdisciplinary enterprise that incorporates ideas, techniques, and researchers from multiple fields, including statistics, linguistics, robotics, electrical engineering, mathematics, neuroscience, economics, logic, and philosophy, to name just a few.” In machine learning, computers process data using an algorithm to perform an assigned objective function, make predictions, and automate certain tasks.

Machine learning algorithms may rely on a variety of computational techniques, including supervised, unsupervised, and reinforcement learning. In supervised learning, “the learning algorithm is given inputs and desired outputs with the goal of learning which rules lead to the desired outputs.” In unsupervised learning, “the learning algorithm is left on its own to determine the relationships within a dataset.” Reinforcement learning, for its part, involves providing feedback to the algorithm regarding how well it makes connections between inputs and outputs as the algorithm navigates the dataset. Upon discovering patterns, machine learning can be programmed to apply these patterns to predict future outcomes.
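To make the distinction between these paradigms concrete, here is a minimal, purely illustrative sketch in Python using toy data and only the standard library; nothing here is drawn from the essays quoted above. Supervised learning pairs inputs with desired outputs, while unsupervised learning looks for structure in unlabeled data.

```python
# Illustrative sketch with invented toy data.

# Supervised learning: the algorithm is given inputs AND desired outputs.
# Here, a 1-nearest-neighbor rule classifies a new point by the label of
# the closest labeled example.
labeled = [(1.0, "low"), (1.2, "low"), (8.0, "high"), (8.5, "high")]

def predict(x):
    return min(labeled, key=lambda pair: abs(pair[0] - x))[1]

# Unsupervised learning: no labels; the algorithm finds structure on its
# own. Here, one assignment step of a k-means-style grouping around two
# fixed centers.
points = [1.0, 1.2, 8.0, 8.5]
centers = [0.0, 10.0]

def assign(points, centers):
    return {c: [p for p in points
                if min(centers, key=lambda k: abs(p - k)) == c]
            for c in centers}

clusters = assign(points, centers)
print(predict(2.0))  # the closest labeled example (1.2) is labeled "low"
print(clusters)
```

Production systems rely on far richer features and dedicated libraries, but the division of labor is the same: labeled examples drive supervised rules, while unsupervised methods surface patterns no one labeled in advance.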

However, as used in this context, the word “learning” does not refer to “the more holistic concept referred to when people speak of human learning.” Indeed, “machine learning does not require a computer to engage in higher-order cognitive skills like reasoning or understanding of abstract concepts.”

This leaves AI methods vulnerable to underperforming values-centered analysis that focuses on principles such as equity, justice, transparency, and ethics. As a result, the increasing pervasiveness of AI throughout society gives rise to new concerns about the implications of AI for a just society. [3]

For innovative financial services firms, AI may enable faster, less expensive, and more inclusive access to basic financial services, mitigating our most vulnerable consumers’ reliance on remarkably high-priced check-cashing services, predatory payday loans with exorbitant interest rates, and increasingly sparse (particularly for those living in banking deserts) brick-and-mortar bank branches or non-bank-affiliated ATMs. Developers of predictive analytics promise to offer a diversity of lower-cost, more accessible investment advising tools.[4]

Yet, as I and others have pointed out, the promise of AI, in all too many instances, is not so easily achieved. Specific use cases may offer greater insights.

Market Participant Use Cases

Market participants and financial services professionals quickly embraced innovative AI applications. For many decades, financial services firms have relied on algorithmic data assessments to predict pricing or assess risk exposure.

Traditional machine learning identifies patterns and seeks to make predictions regarding future events. For market participants, generative AI has the potential to significantly improve the performance and efficiency of pricing models. These benefits spill over into relationships that are ancillary to pre- and post-trade execution activities in financial markets. AI has significant promise to smooth and expedite clearing and settlement processes for securities, derivatives, and banking transactions.

AI may reduce the time devoted to deep-dive searches and to the review and processing of documents and data, enhancing the efficiency of search and data-analytics tools. Such searches and analyses may also be more accurate and completed more rapidly.

In addition to private sector adoption, evidence suggests increasing exploration and integration of AI by administrative agencies.

Federal Financial Market Regulators’ Integration of AI

In December 2018, the Board of Governors of the Federal Reserve, the Federal Deposit Insurance Corporation (FDIC), the Financial Crimes Enforcement Network (FinCEN), the National Credit Union Administration, and the Office of the Comptroller of the Currency issued a joint statement encouraging banks to consider, evaluate, and, where appropriate, responsibly implement innovative approaches to Bank Secrecy Act reporting obligations, including compliance with know-your-customer and anti-money-laundering obligations.[5]

Over the last decade, federal regulatory agencies began to adopt AI tools to facilitate the execution of core government tasks, including the enforcement of regulatory mandates and adjudication of benefits and privileges. Other use cases that are increasingly common include regulatory analysis, rulemaking, internal personnel management, citizen engagement, and service delivery.[6]

According to one of several reports by academics and public interest stakeholders commissioned by the Administrative Conference of the United States, government agencies in the U.S.

are using machine learning algorithms in a variety of contexts to support administrative decision-making. The federal government relies on machine learning algorithms to automate tedious, voluminous tasks and to parse through data to extract patterns that even experts could miss. Meteorologists at the National Oceanic and Atmospheric Administration use machine learning to improve forecasts of severe weather events. Chemists at the Environmental Protection Agency use the program ToxCast to help the agency predict toxicities of chemical compounds to further analyze. . . . [R]esearchers from Stanford University and New York University expand upon particularly promising use cases of federal agency deployment of AI. AI usage is primarily concentrated among only a handful of the hundreds of agencies, bureaus, and offices at the federal level: these include the Office of Justice Programs, Securities and Exchange Commission, National Aeronautics and Space Administration, Food and Drug Administration, United States Geological Survey, United States Postal Service, Social Security Administration, United States Patent and Trademark Office, Bureau of Labor Statistics, and Customs and Border Protection.[7]

Specifically, for civil enforcement in financial markets,

algorithms may enforce agency regulations by “shrinking the haystack” of potential violators to better allocate scarce resources and assist the agency in balancing prosecutorial discretion with accountability. At the SEC, algorithmic enforcement targets fraud in accounting and financial reporting, trading-based market misconduct, insider trading, and unlawful investment advisors; these results are handed off to human enforcement staff who continue to work the cases.[8]

Supervisory Technology

The CFTC staff includes surveillance analysts, forensic economists, and futures trading investigators, each of whom identifies and investigates potential violations. These groups use supervisory technology (SupTech) in enforcement. Over the past few years, the CFTC has transitioned much of its data intake and data analysis to a cloud-based architecture. This increases the flexibility and reliability of our data systems and allows us to scale them as necessary. This transition will allow the Commission to ingest, store, and analyze this data more cost-effectively and efficiently.

Through its regulatory authority, the CFTC requires markets and/or market participants to submit several different and significant data sets. Three broad categories of data are of primary interest. First, end-of-day position data for futures positions, reported by futures commission merchants and clearing members of derivatives clearing organizations; we use this data set to monitor for large concentrations in markets. Second, transaction data reported by exchanges, which includes every executed trade on futures exchanges with nanosecond-level timestamps and account-level and trader-level identification; this data set can be used to find market manipulation schemes. Finally, order message data, also reported by exchanges, which includes each type of message that affects the exchange order book; this data set can be used to identify spoofing and market microstructure manipulation schemes.
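As a simplified illustration of the first category, consider how end-of-day position data might be aggregated to flag large concentrations. The trader names, quantities, and 25% threshold below are entirely hypothetical and do not reflect the Commission’s actual surveillance logic.

```python
# Hypothetical sketch only: traders, quantities, and the threshold are
# invented and do not reflect the Commission's actual surveillance systems.
positions = [
    ("trader_a", 4_000), ("trader_b", 500),
    ("trader_c", 300),   ("trader_a", 1_200),
]

def concentration_flags(positions, share_threshold=0.25):
    """Flag traders holding more than share_threshold of total open interest."""
    totals = {}
    for trader, qty in positions:
        totals[trader] = totals.get(trader, 0) + abs(qty)
    open_interest = sum(totals.values())
    return {t: q / open_interest for t, q in totals.items()
            if q / open_interest > share_threshold}

flags = concentration_flags(positions)
print(flags)  # trader_a holds 5,200 of 6,000 contracts (~87%)
```

The point of the sketch is simply that position data must be aggregated across reporting entities before concentration becomes visible, which is why the Commission collects it centrally.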

Because the CFTC is able to obtain and aggregate data across markets and products, we can develop a more comprehensive understanding of market activity and market integrity. These technologies enable the Commission to detect potential misconduct or other market disruptions more effectively and to take appropriate action earlier.

In a recent ACUS report, an SEC official shared that the agency is similarly relying on SupTech to enhance its supervisory and surveillance programs.

“The SEC’s suite of algorithmic tools provides a glimpse of a potential revolution [in] regulatory enforcement.”[9] These technologies, however, also reveal complications that may arise when integrating AI technology into an agency’s work.

For example, the SEC has had to confront the possibility of training its AI on biased data that reflects the judgments of former SEC employees. As the ACUS report notes, “data challenges in the enforcement context tend to take one of two forms, reflecting either a lack of randomization or the difficulty of finding accurate ground truth in training data.”[10] The report also states that “[t]he SEC is cognizant of these challenges and is attempting to mitigate them.”[11]

In 2016, for example, “the SEC approved a joint plan with FINRA and SROs to develop a consolidated audit trail (CAT),” which “requires SROs and broker-dealers to significantly enhance their information technology capacities to maintain a comprehensive database of granular trading activity in the U.S. equity and options markets, thus broadening reporting to every trade, quote, and order origination, modification, execution, routing, and cancellation.”[12] Using this data for SEC enforcement tools, such as insider trading detectors, “stands to substantially improve accuracy and reliability” of these systems.[13]

Another challenge involves how best to leverage agency know-how in the creation and utilization of highly technical new algorithmic enforcement tools. “Many of the new algorithmic enforcement tools will, as with the SEC’s Form ADV Predictor Tool, rely on [natural language processing (NLP)] techniques to derive semantic meaning from unstructured texts.”[14]

NLP systems have historically performed worse on text from niche domains than on general-purpose text.[15] This may pose issues in using the technology to regulate an industry like finance, which uses its own jargon, slang, and acronyms unfamiliar to the general public. As a result, “developing workable algorithmic governance tools may require more than off-the-shelf and third-party implementations or open-source libraries.”[16] Instead, it may be more effective for an agency like the SEC to use finance-specific datasets rather than standard ones.[17] This may be best achieved by creating algorithmic tools within the agency rather than licensing the technology from a third-party provider. As the ACUS report explains, “[t]he dynamic nature of wrongdoing and the subtlety and complexity of many enforcement tasks mean that the design, deployment, and maintenance of algorithmic enforcement tools may be best achieved … [by] technologists sited within the agency who understand subtle and complex governance tasks.”[18]
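A toy illustration of the domain-vocabulary problem: a word list built from general-purpose text leaves finance acronyms unrecognized, while a finance-aware vocabulary does not. The vocabularies and sentence below are invented purely for the example; real NLP systems model far more than word lists.

```python
# Invented vocabularies and sentence, purely to illustrate the point.
generic_vocab = {"the", "routed", "order", "to", "a"}
finance_vocab = generic_vocab | {"fcm", "dcm", "sef", "e-mini", "spoofing"}

def unknown_tokens(text, vocab):
    """Return the tokens a vocabulary cannot account for."""
    return [t for t in text.lower().split() if t not in vocab]

sentence = "The FCM routed the E-Mini order to a DCM"
print(unknown_tokens(sentence, generic_vocab))  # finance acronyms are unknown
print(unknown_tokens(sentence, finance_vocab))  # []
```

Whatever falls outside a model’s training distribution is, in effect, noise to it, which is why finance-specific training data matters for enforcement tools.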

The Financial Industry Regulatory Authority (FINRA) uses AI and other data-driven tools to monitor trades and promote market security. Market participants, including firms such as futures commission merchants as well as exchanges and SROs, have consistently integrated data-driven tools and AI to increase efficiency in their platforms, analysis, and customer operations. These groups have also used AI for regulatory purposes, digitizing, reviewing, and interpreting new and existing regulatory intelligence to ensure their own compliance as operators. Finally, a number of vendors sell RegTech software to market participants to assist with risk management as well as compliance functions such as the detection of potential market abuse.

The promise of AI, even if realized, is, however, only part of the calculus that must influence regulation. All too often, AI technologies also engender significant risks.

Moving Beyond Deep-Fakes

Earlier this year, the Federal Trade Commission issued a consumer alert encouraging consumers to be vigilant regarding the use of voice clones generated by AI.[19] The consumer alert represented a new high-water mark in the evolving relationship among developers deploying cutting-edge technologies, consumers, and the federal agencies tasked with consumer protection.

A few years ago, a branch manager of a Japanese company in Hong Kong received a call from the director of his parent business.[20] The instructions delivered during the call indicated that, in connection with a pending acquisition, the bank employee should transfer $35 million to a designated account. Having received emails confirming the legitimacy of the transaction, and because he spoke with the director by phone often enough on business matters to recognize his voice, the branch manager obliged and transferred the funds.

This Ocean’s Eleven-styled operation by fraudsters cloning familiar voices to facilitate a heist is concerning. Couple the possibility of increasingly precise voice cloning with the increasing difficulty of distinguishing deep-fake videos used to propagate misinformation, and there is tremendous potential for consumer harm, market disruption, and far more troubling adaptations.

Closer to home, state and federal regulators are warning of synthetic fraud – the creation of fraud scams that aggregate accurate information regarding bank account holders or loan applicants with fabricated data that makes it exceptionally difficult for financial institutions to detect the fraud.[21] As one commentator explains, “the very technology that empowers us may also imperil us.”[22]

For the CFTC, the potential misuse or abuse of AI technology to perpetrate fraud using the well-cloned voices of friends, colleagues, and loved ones, fake videos, or other types of AI-generated financial fraud triggers grave concerns.

Focusing on the Flash Crash: Navigating Innovation in Enforcement Actions

Recall the challenges of disentangling the use of technology when establishing the elements of enforcement cases. The SEC and CFTC enforcement divisions initially concluded that an automated algorithm deployed trade orders for a single institutional investor (Waddell & Reed), rapidly executing the sale of 75,000 E-Mini S&P 500 futures contracts (valued at approximately $4.1 billion) and triggering the ephemeral crash.[23] Several years later, in 2015, investigations by the Department of Justice and the CFTC revealed that a rogue London-based futures trader—Navinder Singh Sarao—had manipulated the E-Mini S&P 500 by using an algorithm to flood the Chicago Mercantile Exchange (CME) with sell orders for E-Mini S&P 500 contracts.[24]

Using a high-frequency trading strategy known as “spoofing,” Sarao entered tens of millions of dollars of orders intended to drive down the price of certain futures contracts.[25] After submitting the sell orders, he entered orders to buy the same contracts at the artificially depressed prices, contemporaneously cancelling the original sell orders before any of them could be executed. Sarao never intended to sell, but his sell orders influenced trading across international financial markets. Before and after the Flash Crash, Sarao generated $50 million in profits. In November 2016, Sarao pled guilty to one count of wire fraud and one count of “spoofing.”
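The order-book footprint just described, large resting orders that are overwhelmingly cancelled rather than filled, can be sketched as a toy screening heuristic. The accounts, quantities, and 0.9 threshold below are invented for illustration; actual spoofing surveillance is far more sophisticated.

```python
# Purely illustrative heuristic -- accounts, quantities, and the threshold
# are invented; real spoofing surveillance is far more sophisticated.
orders = [
    # (account, side, quantity, outcome)
    ("acct_1", "sell", 2000, "cancelled"),
    ("acct_1", "sell", 2500, "cancelled"),
    ("acct_1", "buy",    10, "filled"),
    ("acct_2", "buy",   100, "filled"),
]

def cancel_ratios(orders):
    """Share of each account's submitted quantity that was cancelled."""
    stats = {}
    for acct, _side, qty, outcome in orders:
        placed, cancelled = stats.get(acct, (0, 0))
        stats[acct] = (placed + qty,
                       cancelled + (qty if outcome == "cancelled" else 0))
    return {a: c / p for a, (p, c) in stats.items()}

ratios = cancel_ratios(orders)
suspicious = {a for a, r in ratios.items() if r > 0.9}
print(suspicious)  # acct_1 cancelled ~99.8% of its submitted quantity
```

A screen of this kind only narrows the field; as with the SEC’s tools discussed above, flagged activity must still be investigated by human enforcement staff.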

This simplified description does not capture the complexities of the relevant technologies, nor does it address the impact of incorporating these technologies in markets facing unique and evolving market structures. The rise of alternative trading systems, private pools of capital, and the panoply of digital asset trading service providers amplifies the challenges of developing regulation and policy as well as of bringing successful enforcement actions.

Finding a Path Forward

In 2008, I began to research and publish literature examining the integration of AI in financial markets.[26] Not long after, I began to support federal agencies, including our Commission and the SEC, in the development of regulations addressing concerns emerging from the adoption of technologies that accelerate trading.

Three quick observations:

  • AI technologies offer tremendous promise—a promise to reduce frictions and enhance efficiency, permitting trading at the speed of light; yet, there are many reasons to carefully interrogate these promises and to mitigate exposure to the potential perils that may also follow from adopting AI;
  • We must ensure that responsible AI delivers accountability, and accountability requires transparency and visibility; and
  • The integration of AI by our largest and most complex financial institutions (including exchanges, clearinghouses, and providers of services supporting the execution, clearing, and settlement of, for example, sophisticated derivatives contracts), as well as in transactions offering the least complex financial services (consumer payment transfers or residential mortgages) to the most vulnerable consumers, must be subject to sufficiently rigorous evaluation (whether auditing or alternative approaches) and regulation.

A Call to Action: Regulating AI

With increasing urgency, Congress, regulators, and policymakers acknowledge the need for decisive action to harness the benefits of AI while mitigating its risks. Over the last several months, the White House has launched a series of public discussions regarding AI, and we anticipate the imminent issuance of an executive order outlining critical policy initiatives and a direction of travel for technology developers, financial services firms, federal regulatory agencies, consumers, and consumer protection advocates regarding the integration of predictive analytical decision-making tools that rely on supervised or unsupervised machine learning algorithms or neural networks. For simplicity, I will colloquially refer to this assemblage of distinct technologies as generative AI.

Last year, President Biden announced a Blueprint for an AI Bill of Rights, which sets forth principles that “help provide guidance whenever automated systems can meaningfully impact the public’s rights, opportunities, or access to critical needs.”[27]  The President recognized that technology, data, and automated systems may be used in ways that threaten the rights of the American public.[28]

Last month, during a speech in San Francisco, President Biden delivered remarks on the risks and opportunities posed by AI.[29] He noted that he had “signed an executive order to direct my Cabinet to root out bias in the design and use of AI” and championed Americans “lead[ing] the way and driv[ing] breakthroughs in this critical area, from cybersecurity to public health to [. . .] agriculture to education and, quite frankly, so much more.”[30] The President echoed reflections and concerns that will shape a values-driven discourse on the integration of AI.

Congress is also casting a spotlight on AI. Earlier this year, Senate Majority Leader Chuck Schumer, from my home state of many years—New York—launched a first-of-its-kind high-level framework and began to outline a new regulatory regime for AI, focusing on transparency, accountability, bias, discrimination, and privacy, and engaging leading experts to help inform the proposal.[31]

The EU is in the process of passing the AI Act, which will regulate the use of AI in the EU by imposing varying levels of obligations on providers and users of AI depending on the riskiness of the AI tool. The rules are described as promoting “human-centric and trustworthy AI” while protecting the “health, safety, fundamental rights and democracy from its harmful effects” and imposing guardrails around governance, disclosure, transparency, and policing illegal content.[32]

For those of us who have spent years thinking, researching, and writing about the potential and concerns surrounding the integration of AI, this is a welcome approach.

Digital Assets Task Force

As of today, the Commission has brought 133 cases—either complaints filed in federal court or resolutions entered in administrative proceedings—involving digital assets. Fiscal year 2023 was our busiest yet; the Commission charged forty-seven separate digital assets cases, forty-four of which alleged or found fraudulent or manipulative misconduct. Given the public’s increasing interest in digital asset investment and crypto technology, we should certainly expect these numbers to continue to climb in the next year.

Just last week, I gave a speech at the Baker Institute at Rice University examining two relatively new applications of blockchain technology that put investors in our markets at risk: tokenizing carbon offsets, which I described as layering two unregulated markets on top of one another; and decentralized finance protocols, which allow the public to take leveraged (and more volatile) derivative positions in digital assets without purchasing the assets themselves—and, for now, without the protection of the regulations that apply to traditional financial markets.[33]

The Commission’s Digital Assets Task Force plays a vital role in helping to ensure that the Commission will continue to protect investors in our markets from the risks posed by new investment technologies. The task force works with our Division of Enforcement (Division) investigative and trial teams to provide a deep repository of subject-matter expertise, coordinate efforts across our various offices within the Division, and maintain the Commission’s presence at the vanguard of regulatory law enforcement in this burgeoning sector.

Registration as a Means to Protect Investors

While fraud cases often grab the biggest headlines, the Division’s work on non-fraud, registration-based cases is equally critical. The Commission has brought seventeen non-fraud cases in its history charging digital asset companies with failing to register as various intermediaries subject to our jurisdiction, dating back to the case against Coinflip in 2015 in which we charged a failure to register as a swap execution facility (SEF) or designated contract market (DCM). Many other cases charge a failure to register as a futures commission merchant (FCM).

The non-fraud registration cases are evolving. In the past twelve months, we have brought five registration cases against decentralized finance (DeFi) entities. These cases mirror those brought against centralized crypto platforms, but with the added wrinkle that the defendant is an open-source protocol that purports to have no owner or operator. That kind of corporate structure, such as it is, is not a defense to allowing investors to access risky derivative instruments that, with any other underlying asset, would clearly be subject to the CFTC’s registration requirements.

These cases are critical to our mission to protect the public from unreasonable risk. In traditional financial markets, these regulated entities serve as a point of intermediation at which the Commission can impose regulatory requirements, for example, to make disclosures to customers, to manage risk, to meet capital requirements, or to conduct anti-money laundering and know-your-customer onboarding. Imposing these requirements reduces the risk faced by the most vulnerable participants in our markets, specifically retail investors. The regulatory requirements work to balance out asymmetries of information and to reduce fraud by shining a light in dark corners.

I have in recent months called for legislation and regulation to codify what has become a general consensus: regulators must be able to step in to protect the growing marketplace for digital assets. Charging a known fraudster serves many purposes of deterrence, retribution, and restitution, among others. But robust regulatory requirements ideally prevent fraud before it occurs in the first place. At the same time that crypto technology is bringing fintech into the future, regulations need to keep pace.

New Technology, New Challenges

Although the frauds remain, at base, the same, the new context of the crypto-economy poses new challenges for law enforcement. As crypto derivatives have become listed on existing registered exchanges, the CFTC has been able to incorporate surveillance of those products into our existing infrastructure. This is no small thing; thirty of our 133 digital asset cases have involved the marketing or trading of digital asset derivatives. Ultimately, the products are still futures, options, or swaps, and we have the ability to analyze them like we would any such product whose underlying asset was in agriculture, precious metals, or interest rates.

The spot market presents well-documented problems. Investigating a digital asset fraud case often requires making sense of an enormous number of blockchain transactions through wallets that can be difficult to attribute to a particular individual. The CFTC continues to develop new tools to hone our ability to surveil digital asset markets, including starting to think deeply about ways we can harness generative artificial intelligence and large language models to assist our mandate to protect investors.

In 2020, LabCFTC’s Project Streetlamp resulted in perhaps our first adoption of AI to aid in enforcement, although for the more limited project of identifying unregistered foreign entities.[34] LabCFTC has now grown into our Office of Technology Innovation, whose goal is to continue to push the ways that we harness new technology. We recently appointed our first Chief Data Scientist to work, among other things, on building layers of artificial intelligence over our agency’s already robust advanced analytics. I am excited for what our agency will prove to be capable of in the coming months.

Racing at Light-Speed in the Dark

Even when technologies offer great promise, we must ensure compliance with existing legal and regulatory guardrails, including those that are designed to improve a regulator’s ability to effectively police AI.  In an academic paper that I published five years ago, I explained:

Federal statutes [and regulation] regulate risk-taking by financial market intermediaries including the broker-dealers who execute trades and the securities exchange and clearinghouse platforms where trading occurs. For almost a century, these statutes have enforced norms that encourage disclosure, transparency, and fairness. In modern markets, innovation, and technology challenge these core principles of regulation. The engineering of computer-driven automated trade execution, the development of algorithmic trading, and the introduction of high frequency trading strategies accompany a number of important shifts in financial market intermediation.[35]

That academic project traced the evolution of computer-based trading and other technologies, from the Paperwork Crisis of 1967 to the Stock Market Crash of 1987, the Financial Crisis of 2007, and the Flash Crash of 2010.

Given the accelerated pace of algorithmically driven trading on digital platforms and a global, almost instantaneous, internet-based infrastructure, I raised alarms regarding the need for guardrails that effectively address the integration of high-frequency trading, the permitting of co-location, evolving enforcement challenges, and other concerns that arise from nuanced and sometimes impermissible trading strategies such as front-running and spoofing.

A Word About Bias

Before I close, please allow me to say a few words that relate to more general concerns regarding the integration of AI. While my remarks have largely focused on the technical benefits and limits of AI, the broader literature raises a number of issues and concerns that I have not explored here. Specifically, there are deep concerns that bias and discrimination in underlying data may be amplified through the use of generative AI.

Facial recognition software that may be helpful in certain law enforcement contexts must be subject to fairness and due process constraints. The limited ability of such AI platforms to recognize or distinguish the facial features of individuals with darker complexions, and the well-documented underrepresentation of women of color in popular training data sets, should also be carefully accounted for. Finally, individual consumer privacy concerns should be at the center of any careful evaluation of AI platforms.

There is a risk that automated programs and algorithms can introduce unintended bias as a consequence of the way they have been trained or of the datasets used to build their knowledge bases. In the context of financial market regulation, regulatory bodies such as FINRA and the SEC require that data-driven tools, including AI, be tested periodically to ensure they are working as intended. Additionally, these tools must involve human oversight to avoid unintended self-evolution.

There are also concerns about the protection of the data used by these tools. It is critical to ensure that data, whether taken directly from regulated entities or provided by them periodically, is stored securely and used only for its intended purpose. There are further risks associated with using data for purposes that individuals never anticipated, or with using protected data, such as information related to health, sexual orientation, gender identity, or religion.

It will be imperative to monitor any integration of AI closely to ensure that we identify and work diligently to eliminate (or, at the least, mitigate) bias or discrimination. Generative and other AI tools and programs must be subject to periodic testing to assess the accuracy of their results.
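The periodic testing described above can be made concrete with even very simple metrics. The sketch below is a hypothetical audit (the function name, data, and threshold are illustrative, not drawn from any regulator's rulebook) applying the "four-fifths" disparate impact heuristic, long used in U.S. employment discrimination analysis, to the decisions of an automated approval model.

```python
def disparate_impact_ratio(outcomes):
    """Given binary approval decisions keyed by demographic group,
    return the ratio of the lowest group approval rate to the
    highest. Values below 0.8 (the 'four-fifths rule') are a
    common flag for further fairness review."""
    rates = {
        group: sum(decisions) / len(decisions)
        for group, decisions in outcomes.items()
    }
    return min(rates.values()) / max(rates.values())

# Hypothetical periodic audit of an automated approval model:
# 1 = approved, 0 = denied.
audit = {
    "group_a": [1, 1, 1, 0, 1, 1, 1, 1, 0, 1],  # 80% approved
    "group_b": [1, 0, 1, 0, 1, 0, 1, 0, 0, 1],  # 50% approved
}
ratio = disparate_impact_ratio(audit)   # 0.5 / 0.8 = 0.625
flagged = ratio < 0.8                   # True: warrants review
```

A flag of this kind is a trigger for human review rather than a verdict; production audits would also examine error rates, calibration, and the provenance of the training data.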

Thank you again for allowing me to share my reflections on the future of AI in finance with you. I look forward to the panel’s presentations and your questions.

[1] Manny Cohen completed college and entered law school in the years that Congress enacted the Securities Act of 1933 and the Securities Exchange Act of 1934. A few years later, he joined the staff of the Securities and Exchange Commission from private practice. Manny served for a decade as a staff attorney and later as Director of the Division of Corporate Finance and Chief Counsel of the Commission. After twenty-seven years at the SEC, in 1961, President John F. Kennedy named him a Commissioner. In 1964, President Lyndon B. Johnson elevated him to serve as Chairman of the SEC.

[2] Kristin N. Johnson & Carla L. Reyes, Exploring the Implications of Artificial Intelligence, 8 J. Int'l & Comp. L. 315, 315 (2021).

[3] Johnson & Reyes, supra note 2, at 321–22 (quoting Ryan Calo, Artificial Intelligence Policy: A Primer and Roadmap, 51 U.C. Davis L. Rev. 399, 404 (2017); Harry Surden, Artificial Intelligence and Law: An Overview, 35 Ga. St. U. L. Rev. 1305, 1310 (2019); Michael Simon et al., Lola v. Skadden and the Automation of the Legal Profession, 20 Yale J.L. & Tech. 234, 254 (2018); Michael L. Rich, Machine Learning, Automated Suspicion Algorithms, and the Fourth Amendment, 164 U. Pa. L. Rev. 871, 880 (2015)).

[4] For a discussion of the concerns regarding the use of predictive data analytics in investment advising, see Press Release, SEC, SEC Proposes New Requirements to Address Risks to Investors from Conflicts of Interest Associated with the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers (July 26, 2023), and the accompanying proposed rule.

[5] Joint Statement, Bd. of Governors of the Fed. Rsrv. Sys., Fed. Deposit Ins. Corp., Fin. Crimes Enf’t Network, Nat’l Credit Union Admin., Off. of the Comptroller of the Currency, Joint Statement on Innovative Efforts to Combat Money Laundering and Terrorist Financing (Dec. 3, 2018).

[6] David Freeman Engstrom, Daniel E. Ho, Catherine M. Sharkey & Mariano-Florentino Cuéllar, Government by Algorithm: Artificial Intelligence in Federal Administrative Agencies (Feb. 2020) (report to the Admin. Conf. of the U.S.); Cary Coglianese, A Framework for Governmental Use of Machine Learning (Dec. 8, 2020) (report to the Admin. Conf. of the U.S.).

[7] Coglianese, supra note 6, at 31.

[8] Id.

[9] Engstrom et al., supra note 6, at 25.

[10] Id.

[11] Id. at 26.

[12] Id.

[13] Id.

[14] Id.

[15] Id.

[16] Id.

[17] Id.

[18] Id. at 27.

[19] Quinn Owen, How AI Can Fuel Financial Scams Online, ABC News (Oct. 11, 2023).

[20] Thomas Brewster, Fraudsters Cloned Company Director’s Voice In $35 Million Heist, Police Find, Forbes (Oct. 14, 2021).

[21] Henry Engler, Synthetic Identity Fraud Worrying U.S. Regulators, Reuters (Nov. 24, 2020).

[22] Nicholas Shevelyov, How Technologists Can Translate Cybersecurity Risks into Business Acumen, Forbes (May 15, 2020).

[23] CFTC & SEC, Findings Regarding the Events of May 6, 2010, at 1 (Sept. 30, 2010); see also What Caused the Flash Crash? One Big, Bad Trade, The Economist (Oct. 1, 2010).

[24] See, e.g., CFTC Release No. 7156-15, CFTC Charges U.K. Resident Navinder Singh Sarao and His Company Nav Sarao Futures Limited PLC with Price Manipulation and Spoofing (Apr. 21, 2015).

[25] Lindsay Whipp & Kara Scannell, ‘Flash-crash’ trader Navinder Sarao pleads guilty to spoofing, Fin. Times (Nov. 9, 2016).

[26] Kristin N. Johnson, Regulating Innovation: High Frequency Trading in Dark Pools, 42 J. Corp. L. 833 (2017).

[27] White House Off. of Sci. & Tech. Pol’y, Blueprint for an AI Bill of Rights (2022).

[28] Id. (noting “Too often, these tools are used to limit our opportunities and prevent our access to critical resources or services. These problems are well documented. In America and around the world, systems supposed to help with patient care have proven unsafe, ineffective, or biased. Algorithms used in hiring and credit decisions have been found to reflect and reproduce existing unwanted inequities or embed new harmful bias and discrimination. Unchecked social media data collection has been used to threaten people’s opportunities, undermine their privacy, or pervasively track their activity—often without their knowledge or consent.”).

[29] Remarks by President Biden on Seizing the Opportunities and Managing the Risks of Artificial Intelligence (June 20, 2023).

[30] Id.

[31] Karoun Demirjian, Schumer Lays Out Process to Tackle A.I., Without Endorsing Specific Plans, N.Y. Times (June 21, 2023).

[32] Press Release from the European Parliament, EU AI Act: First Regulation on Artificial Intelligence (June 8, 2023).

[33] Kristin N. Johnson, Commissioner, CFTC, Keynote Remarks at Rice University’s Baker Institute for Public Policy Annual Energy Summit, Credibility, Integrity, Visibility: The CFTC’s Role in the Oversight of Carbon Offset Markets (Oct. 5, 2023).

[34] Press Release, CFTC Selects Nakamoto Terminal as Winner of Agency’s First Science Prize Competition (Nov. 17, 2020).

[35] Johnson, supra note 26.