Public Statements & Remarks

Speech of Commissioner Kristin Johnson: Building A Regulatory Framework for AI in Financial Markets

Regulating AI in Financial Markets

February 23, 2024

I would like to thank the New York City Bar Association for the kind invitation to join you for today’s exciting program. Thank you to the organizers of the Emerging Technology Symposium and co-chairs Adele Hogan and Jerome Walker. I am also thankful to my staff Rebecca Lewis, Julia Welch, and Tamika Bent for their assistance preparing my draft remarks today. While my remarks are my own, I am hopeful that focusing on the implications of introducing Artificial Intelligence (AI) in financial markets will be of interest.

Later in my remarks, I will introduce three policy interventions that I strongly urge the Commodity Futures Trading Commission (CFTC or Commission) to consider adopting: 1) a survey of market participants’ use of AI; 2) the introduction of heightened civil monetary penalties for actors who abuse AI technologies to engage in fraud, market manipulation, or otherwise disrupt the integrity of our markets; and 3) a collaborative inter-agency initiative intended to harmonize AI regulation across financial markets.

Establishing A Regulatory Framework for AI

Last fall, I was invited to deliver the Manuel F. Cohen endowed lecture at the George Washington University Law School. For those familiar, Manny Cohen served as Chairman of the Securities and Exchange Commission from 1964 to 1969. Manny is remembered for advancing the legal theory supporting insider trading jurisprudence. The technologies that financial market regulators navigate have changed significantly in the ensuing decades.

During the lecture, I noted that the assemblage of technologies we describe colloquially as AI offers tremendous promise for our society—for example, faster and more accurate disease diagnosis, empowering doctors and their patients, through early detection, to wage more successful battles against endemic diseases such as breast cancer or prostate cancer. During the Covid-19 pandemic, epidemiologists were able to offer more efficient disease mapping, enabling them to predict and identify hotspots and disseminate information on county and state online dashboards to alert citizens and businesses regarding disease penetration and proliferation.

This promise is not, however, without limitations. In fact, the adoption and integration of AI in countries around the world has “engendered an international discourse that presents staggering questions.”[1]

Acknowledging both the promise and peril of AI, the White House issued an Executive Order (the “Executive Order”) encouraging independent regulatory agencies to consider “using their full range of authorities to protect American consumers from fraud, discrimination, and threats to privacy and to address other risks that may arise from the use of AI, including risks to financial stability.”[2]

Providing a few potential regulatory actions, the White House encourages regulators to use “rulemaking, as well as emphasizing or clarifying where existing regulations and guidance apply to AI.”[3] This may include “clarifying the responsibility of regulated entities to conduct due diligence on and monitor any third-party AI services they use, and emphasizing or clarifying requirements and expectations related to the transparency of AI models and regulated entities’ ability to explain their use of AI models.”[4]

As a market regulator at the CFTC, I am guided by the principles articulated in the Executive Order. Today, I’d like to focus on a few AI use cases. Let’s consider both regulatory use cases and market use cases.

Today, I also issue a call to action and suggest several policy interventions that will foster the safe deployment and use of AI. I believe that deterrence can and should play a central role in our AI regulatory framework. As a result, I would encourage the adoption of regulation that imposes heightened penalties on bad actors who use AI to engage in fraud, market manipulation, or other prohibited forms of conduct.

In addition, standing alongside a recent White House announcement that introduces a national AI Safety Institute, I believe that prudential and market regulators must come together and collaborate to ensure that the regulation governing AI in our financial markets is consistent with our long-standing commitment to ensure the integrity and stability of markets and to protect the most vulnerable customers in our markets.

I believe that regulators must participate in a coordinated effort to survey and inventory the uses of AI across financial markets, evaluate and assess the correlations that may exist and create risk management (particularly systemic risk) concerns, and articulate a set of uniform standards that may serve as a regulatory reference for national standard setting authorities such as the National Institute of Standards and Technology and members of Congress seeking to craft legislation that regulates the adoption of AI across diverse sectors of our economy.

Defining AI

There is no single definition of AI across our markets or across jurisdictions. When regulating, it is important to understand the contours of the subject of regulation.

Artificial intelligence has been broadly defined as the “application of computational tools to address tasks traditionally requiring human sophistication” and, in this sense, it has existed “for many years.”[5]

In 2020, Congress adopted a statutory definition of AI.[6] The Executive Order leverages this statutory definition, and defines AI as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.”[7] This definition further provides that “AI systems use machine and human-based inputs to—(A) perceive real and virtual environments; (B) abstract such perceptions into models through analysis in an automated manner; and (C) use model inference to formulate options for information or action.”[8]

In our limited time, it may be difficult to explore the universe of AI use cases, but distinguishing supervisory use cases from market use cases offers a useful point of departure for a discussion of regulation.

Machine learning algorithms enjoy increasingly widespread use in trading markets, so a few case studies are a useful place to begin.

AI Use Cases

Government by Algorithm: Artificial Intelligence in Federal Administrative Agencies, a report published by the Administrative Conference of the United States (with lead authors David Freeman Engstrom, Stanford University; Daniel E. Ho, Stanford University; Catherine M. Sharkey, New York University; and Mariano-Florentino Cuéllar, Stanford University and former associate justice of the Supreme Court of California), notes that:

[AI] promises to transform how government agencies do their work. Rapid developments in AI have the potential to reduce the cost of core governance functions, improve the quality of decisions, and unleash the power of administrative data, thereby making government performance more efficient and effective. Agencies that use AI to realize these gains will also confront important questions about the proper design of algorithms and user interfaces, the respective scope of human and machine decision-making, the boundaries between public actions and private contracting, their own capacity to learn over time using AI, and whether the use of AI is even permitted.[9]

Financial market regulators are integrating increasingly complex and sophisticated technologies into market supervision.

Algorithms may enforce agency regulations by “shrink[ing] the haystack” of potential violators to better allocate scarce resources and assist the agency in balancing prosecutorial discretion with accountability.[10]

According to the same report, financial market regulators’ algorithmic enforcement enables more accurate detection of fraud in accounting and financial reporting, trading-based market misconduct, insider trading, and unlawful investment advisors; these results are handed off to human enforcement staff who continue to work the cases.[11]
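The report does not disclose any agency's actual models, but the "shrink the haystack" idea can be illustrated with a minimal, hypothetical sketch: summarize each account's trading activity in a single metric and refer only statistical outliers to human enforcement staff. The function name, the metric, and the z-score threshold below are illustrative assumptions, not a description of any agency system.

```python
from statistics import mean, stdev

def flag_outliers(activity: dict[str, float], z_threshold: float = 3.0) -> list[str]:
    """Return account IDs whose activity metric is a statistical outlier.

    `activity` maps an account ID to one summary metric (e.g., a daily
    order-to-trade ratio). Accounts more than `z_threshold` standard
    deviations above the mean are surfaced for human review; everything
    else stays in the haystack.
    """
    values = list(activity.values())
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []  # no variation, nothing stands out
    return [acct for acct, v in activity.items() if (v - mu) / sigma > z_threshold]

# Illustrative use: one account trades far outside the baseline.
accounts = {f"acct-{i}": 10.0 + (i % 3) for i in range(50)}
accounts["acct-x"] = 95.0
print(flag_outliers(accounts))  # ['acct-x']
```

The value of the approach, as the report describes it, is in the division of labor: the algorithm only narrows the field, and human staff make the enforcement judgments.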

The CFTC has on staff surveillance analysts, forensic economists, and futures trading investigators, each of whom identify and investigate potential violations. These groups use supervisory technology (SupTech) in support of their work. Over the past few years, the CFTC has transitioned much of its data intake and data analysis to a cloud-based architecture. This increases the flexibility and reliability of our data systems and allows us to scale them as necessary. This transition will allow the Commission to ingest, store, and analyze this data more cost-effectively and efficiently.[12]

Market and prudential regulators are beginning to develop rules to govern the use of AI in financial markets. Alongside market and prudential regulators’ use of AI, our regulatory framework obligates certain market participants to actively supervise their platforms and police violations of rules and conduct obligations. These entities operate as self-regulatory organizations (SROs) or designated self-regulatory organizations (DSROs).

Consequently, beyond regulators’ use of supervisory technologies, it is important to explore DSRO and SRO uses of novel technologies.

It is imperative that regulators also better understand market participants’ use of AI and other similar sophisticated technologies. It is also important that we develop dynamic rules that offer the flexibility to address the evolving nature of AI technologies and the diversity of risks that may emerge as a result of adopting these technologies.

SROs have integrated data-driven tools that may rely on AI to increase efficiency, enhance analysis, and improve customer operations. In some instances, SROs and DSROs may use AI for regulatory purposes such as digitizing, reviewing, and interpreting new and existing regulatory intelligence.

Limitations of AI

While AI offers significant promise, AI technologies may also introduce diverse challenges. I will highlight three here: fraud and market manipulation, bias and discrimination, and privacy and data protection.

Fraud and market manipulation

As I noted in the Cohen Lecture last year, AI-enabled fraud is a significant concern:

A few years ago, a branch manager of a Japanese company in Hong Kong received a call from the director of his parent business. The instructions delivered during the call indicated that, in connection with a pending acquisition, the bank employee should transfer $35 million to a designated account. Having received emails confirming the legitimacy of the transaction, and because he spoke often enough with the director by phone on business matters to recognize his voice, the branch manager kindly obliged, followed the instructions, and transferred the funds.

This Ocean’s Eleven-styled operation by fraudsters cloning familiar voices to facilitate a heist is concerning. Couple the possibility of increasingly precise voice-cloning with the increasing difficulty of distinguishing deep fake videos to propagate misinformation, and there is tremendous potential for consumer harm, market disruption, and far more troubling adaptations.

Closer to home, state and federal regulators are warning of synthetic fraud—the creation of fraud scams that aggregate accurate information regarding bank account holders or loan applicants with fabricated data that makes it exceptionally difficult for financial institutions to detect the fraud.[13]

AI, however, presents new challenges in prosecuting fraud and manipulation, particularly in proving the intent required by our anti-fraud statutes.

Bad actors seeking to commit fraud or manipulation in both spot and derivatives transactions may integrate AI into diverse strategies for nefarious purposes.

Consequently, regulators will need to develop effective strategies to police markets in this new technology-driven era.

Three Key Proposals for Regulating AI in Financial Markets

In response to these challenges, Congress, industry trade associations, and state and federal regulators have offered several policy responses.

Application of existing laws and frameworks

To address certain challenges attendant to the adoption and use of AI, agencies are applying existing regulations to this novel technology, particularly in the context of bias and discrimination. Laws and regulations that protect civil rights and promote our democratic values must be vigorously enforced regardless of the technology.

In October 2022, the White House Office of Science and Technology Policy published The Blueprint for an AI Bill of Rights (“AI Bill of Rights”) to “support the development of policies and practices that protect civil rights and promote democratic values” in the development of artificial intelligence systems.[14] The White House engaged in significant collaboration with the public through panel discussions, public listening sessions, requests for information, and other informal means of engagement prior to publication to help guide the drafting of the AI Bill of Rights.[15] Though it is non-binding, the AI Bill of Rights offers guidance to the American public on how to safely and responsibly deploy AI.[16] The AI Bill of Rights outlines five key principles to guide the use and regulation of AI.[17] These principles involve: (1) protection from unsafe and ineffective systems; (2) avoiding algorithmic discrimination; (3) data privacy protection; (4) notice and explanation; and (5) opportunities to opt out and access to humans who can consider and remedy problems.[18]

The Algorithmic Accountability Act of 2023 was reintroduced to provide the FTC new authority to create protections for people affected by AI-driven decisions in housing, credit, education, and other high-impact uses.[19] The Act would require companies to conduct impact assessments of their automated decision processes and would create a public repository at the FTC, where consumers can review which decisions companies have automated.[20]

The Consumer Financial Protection Bureau (CFPB) is also taking action to address privacy concerns, including the privacy concerns raised by AI. On October 19, 2023, the CFPB released a Notice of Proposed Rulemaking (NPRM) on “Personal Financial Data Rights Rule” (the “Proposed Rule”), which aims to strengthen consumers’ access and control over their personal financial data. At a high level, the Proposed Rule intends to provide consumers with the right to access their personal financial data and to share that data with other financial services providers.[21]

Commissioner Johnson’s Proposed Solutions

Developing an AI regulatory framework for our markets is among my highest priorities. I suggest at least the following steps as we begin to outline regulation for derivatives markets. 

Gathering information

As a first step, we need to gather more information. Last month, the CFTC released a Request for Comment (RFC) on the use of Artificial Intelligence in CFTC-regulated markets.[22] The RFC represents an essential effort to advance inquiries regarding the integration of AI in our markets and to explore the need to introduce guardrails to mitigate the risks that AI technologies may present.

The comments that we will receive in response to the RFC will enable us to evaluate the need for any formal guidance and possibly rulemakings regarding the integration of AI in CFTC-regulated markets. My office was deeply engaged in the process for developing the RFC, and I am grateful that the CFTC staff incorporated the many substantive comments from my office.

The RFC addresses many essential issues, seeking comment on, for example,

  • whether the adoption of AI may impede enforcement of antifraud and market manipulation regulations,
  • the policies and practices adopted to prevent the use of AI-driven strategies in schemes designed to manipulate the market, and
  • efforts to use AI-based market supervisory technologies to detect market manipulation or fraud.[23]

The RFC also asks important questions regarding bias in data sets and algorithms.[24] The RFC additionally recognizes governance concerns, asking for further information on how CFTC-regulated entities are modifying governance structures in response to AI.[25]

A principles-based policy framework

It is not possible to predict what technological developments will exist in the future. Machine-learning-driven algorithmic trading itself raises various policy concerns. However, confining regulatory perspectives to machine-learning-driven algorithmic trading alone is too constraining in an ever-evolving technological mosaic. To ensure that technology has scope to improve and develop, it is important to formulate a policy framework generally applicable to trading technology, regardless of form.

In consultation with members of a working group of the Market Risk Advisory Committee, I look forward to exploring the following principles and their role in our principles-based regulatory framework.

Intelligibility—First and fundamentally, are individuals at the organizations able to understand, enunciate, and replicate the trading outcomes of any technology used? This requires the ability to identify inputs to any trading model and understand the derivation of outputs.

Risk—What risk management is in place with respect to the technology? On a macro level, are there risks to the broader financial markets from technology used unilaterally or in combination with firms using similar technologies? In this regard, we are reminded of the “Flash Crash” where, within minutes on a trading day, “major equity indices in both the futures and securities markets . . . suddenly plummeted . . . 5–6%” and then rebounded.[26] A lesson derived by the subsequent CFTC-SEC joint staff report on the Crash was that the automated execution of large orders “can trigger extreme price movements” and that “interaction between automated execution programs and algorithmic trading strategies can quickly erode liquidity and result in disorderly markets.”[27]

Compliance—Related to intelligibility is ensuring that technology results in trading behavior that is compliant with market requirements. For example, if an algorithmic trading program with a learning mechanism recognizes correlations between spoofing and higher returns, what protections are in place to prevent it from implementing a trading strategy integrating such conduct? This concern applies, mutatis mutandis, to everything from “banging the close” to trading ahead of a customer order; indeed, to any fraud or market manipulation devisable.
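The speech does not prescribe a control design, but one familiar pattern—offered here purely as an illustrative assumption, not a CFTC requirement—is a hard compliance layer that sits outside the learning loop and vetoes order flow whose cancel-heavy pattern resembles spoofing, regardless of what the model has learned is profitable. The class name and thresholds below are hypothetical.

```python
class SpoofingGate:
    """Illustrative pre-trade guardrail (hypothetical thresholds): vetoes
    new orders when the strategy's recent cancel-to-placement ratio looks
    like layering/spoofing, independent of the learned strategy's choices."""

    def __init__(self, max_cancel_ratio: float = 0.95, min_events: int = 20):
        self.placed = 0
        self.cancelled = 0
        self.max_cancel_ratio = max_cancel_ratio
        self.min_events = min_events  # require some history before judging

    def record_placement(self) -> None:
        self.placed += 1

    def record_cancel(self) -> None:
        self.cancelled += 1

    def allow_new_order(self) -> bool:
        if self.placed < self.min_events:
            return True  # too little history to infer a pattern
        # Block once nearly every placed order is being cancelled.
        return self.cancelled / self.placed <= self.max_cancel_ratio

# Illustrative use: a strategy that cancels everything it places is cut off.
gate = SpoofingGate()
for _ in range(30):
    gate.record_placement()
    gate.record_cancel()
print(gate.allow_new_order())  # False
```

A production control would of course weigh price levels, timing, and evidence of intent; the point of the sketch is only the architecture—compliance constraints enforced outside the learning mechanism, so prohibited conduct is blocked even if the model "discovers" it.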

Oversight—Does management have sufficient functional understanding to discharge its supervisory responsibilities?

Market Responsibility—Has the cost of or access to critical technology become such that one or a small number of participants dominate markets?

Notice and Explainability—These issues are particularly likely to arise in the context of SROs. Where SROs use AI to oversee members, make enforcement decisions, and determine obligations such as margin requirements, it will be important to consider how to ensure that affected parties have sufficient notice and understanding of the reasons underlying the decisions being made.

These principles are by no means exhaustive. At best, they are the beginning of an outline of a framework for which public input, including from participants in CFTC trading markets, is enthusiastically encouraged. With public dialogue, the opportunity exists to together create a framework that simultaneously serves two goals.

The first is protecting the integrity of the trading markets so that they fairly serve the interests of participants and the larger public. The second is welcoming and encouraging the development and application of the newest technologies with responsible guardrails. In this way, we can ensure that these technologies help assure that the United States financial markets remain leaders in financial innovation in the years ahead.

Heightened penalties

As a CFTC Commissioner, I am also deeply concerned about the potential for abuse of AI technologies to facilitate fraud in our markets. As we examine the development of and limitations on the legitimate uses of AI in our markets, it is also important for the CFTC to emphasize that any misuse of these technologies will draw sharp penalties.

In fact, I am calling for the Commission to consider introducing heightened penalties for those who intentionally use AI technologies to engage in fraud, market manipulation, or the evasion of our regulations.

In many instances, our statutes provide for heightened civil monetary penalties where appropriate.

I propose that the use of AI in our markets to commit fraud and other violations of our regulations may, in certain circumstances, warrant a heightened civil monetary penalty.

Bad actors who would use AI to violate our rules must be put on notice and sufficiently deterred from using AI as a weapon to engage in fraud, market manipulation, or to otherwise disrupt the operations or integrity of our markets. We must make it clear that the lure of using AI to engage in new malicious schemes will not be worth the cost.

Recommendation for an inter-agency task force

At the end of last year, the Biden Administration announced the creation of an AI Safety Institute, housed within the Commerce Department, which will be established within NIST, the National Institute of Standards and Technology.[28] NIST was founded in 1901 and is one of the nation’s oldest physical science laboratories.[29] Congress established NIST to “promote U.S. innovation and industrial competitiveness by advancing measurement science, standards, and technology.”[30]

The AI Institute will “operationalize” NIST’s existing AI Risk Management Framework to “create guidelines, tools, benchmarks, and best practices for evaluating and mitigating dangerous capabilities and conducting evaluations including red-teaming to identify and mitigate AI risk.”[31] These guidelines will include “technical guidance that will be used by regulators considering rulemaking and enforcement on issues such as authenticating content created by humans, watermarking AI-generated content, identifying and mitigating against harmful algorithmic discrimination, ensuring transparency, and enabling adoption of privacy-preserving AI.”[32]

To support the AI Institute, NIST announced this month the creation of the AI Safety Institute Consortium, which will gather together hundreds of stakeholder organizations across the public and private sectors to “develop science-based and empirically backed guidelines and standards for AI measurement and policy, laying the foundation for AI safety across the world.”[33] The Consortium members will contribute work in areas such as developing guidelines for identifying and evaluating AI capabilities, with a focus on capabilities that could potentially cause harm, and developing tools, methods, and protocols for the testing and development of AI, among several other planned workstreams.[34]

In support of the work of the Consortium, I would like to propose the creation of an inter-agency task force composed of financial regulators including the CFTC, SEC, Federal Reserve System, OCC, CFPB, FDIC, FHFA, and NCUA.

The task force would support the AI Safety Institute in developing guidelines, tools, benchmarks, and best practices for the use and regulation of AI in the financial services industry. The task force should have a mandate to provide recommendations to the AI Safety Institute as well as evaluate proposals coming out of the Institute.

Addressing the perils of AI, while harnessing its promise, is a challenge that will require a whole-of-government approach, with regulators working together across diverse agencies. Through the task force, I believe that financial regulators will aid the Institute in its critical mission: providing their essential experience and expertise to help guide the development of AI standards for the financial industry.


I want to thank you all for your time and attention today. We started by trying to better understand what AI is and how it is being used in financial markets and more broadly. We then examined the problems these applications pose and turned to some of the most important work we can do right now: considering potential responses to these urgent and emerging challenges.


[1] Kristin N. Johnson, Commissioner, U.S. Commodity Futures Trading Comm’n, Manuel F. Cohen Endowed Lecture at George Washington University Law School, (Oct. 12, 2023), available at

[2] Exec. Order No. 14,110, 88 Fed. Reg. 75,191 (Oct. 30, 2023).

[3] Id.

[4] Id.

[5] Fin. Stability Bd., Artificial Intelligence and Machine Learning in Financial Services Market Developments and Financial Stability Implications 3 (Nov. 1, 2017),

[6] 15 U.S.C. 9401(3). The National Artificial Intelligence Initiative Act of 2020 became law as part of a broader bill covering national defense matters in 2021 and codified a definition of artificial intelligence. This definition was promulgated as part of the National Artificial Intelligence Initiative. The purpose of the Initiative is to: “(1) ensure continued United States leadership in artificial intelligence research and development; (2) lead the world in the development and use of trustworthy artificial intelligence systems in the public and private sectors; (3) maximize the benefits of artificial intelligence systems for all American people; and (4) prepare the present and future United States workforce for the integration of artificial intelligence systems across all sectors of the economy and society.” 15 U.S.C. § 9411.

[7] Id.

[8] Id.

[9] David Freeman Engstrom et. al., Rep. Submitted to the Admin. Conf. of the United States, Government by Algorithm: Artificial Intelligence in Federal Administrative Agencies 6 (Feb. 2020),

[10] Id. at 22.

[11] Id. at 23–24.

[12] Manuel F. Cohen Lecture, supra note 1.

[13] Id.

[14] The Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People, White House Off. of Sci. & Tech. Pol’y 2 (2022),

[15] Id. at 55–62.

[16] Id.

[17] Id. at 14.

[18] Id. at 15–52.

[19] S. 2892, 118th Cong. (2023-2024).

[20] Id.

[21] Rohit Chopra, Director, Consumer Fin. Prot. Bureau, Prepared Remarks of CFPB Director Rohit Chopra on the Proposed Personal Financial Data Rights Rule (Oct. 19, 2023),

[22] U.S. Commodity Futures Trading Comm’n, Request for Comment on the Use of Artificial Intelligence in CFTC-Regulated Markets (Jan. 25, 2024),

[23] Id.

[24] Id. at 10.

[25] Id. at 8–9.

[26] U.S. Securities & Exchange Comm’n & U.S. Commodity Futures Trading Comm’n, Findings Regarding the Market Events of May 6, 2010: Report of the Staffs of the CFTC and SEC to the Joint Advisory Committee on Emerging Regulatory Issues 1 (Sept. 30, 2010),

[27] Id. at 6.

[28] Press Release, FACT SHEET: Vice President Harris Announces New U.S. Initiatives to Advance the Safe and Responsible Use of Artificial Intelligence, White House (Nov. 1, 2023),

[29] About NIST, (last visited Feb. 23, 2024).

[30] Id.

[31] Vice President Harris Announces New U.S. Initiatives to Advance the Safe and Responsible Use of Artificial Intelligence, supra note 28.

[32] Id.

[33] U.S. Artificial Intelligence Safety Institute, (last visited Feb. 23, 2024).

[34] Id.