
Statement of Commissioner Kristin N. Johnson: Articulating an Agenda for Regulating AI

May 02, 2024

Introduction

Good afternoon. It’s a pleasure to be here for today’s Technology Advisory Committee (TAC) meeting.

Three Significant Steps in Developing CFTC AI Guidance and Regulation

Today’s discussion on artificial intelligence (AI) marks another significant initiative launched at the Commodity Futures Trading Commission (CFTC or Commission) to assess the role of AI in our markets.

CFTC RFC

First, in January of 2024, my staff worked closely with a task force of senior CFTC leaders and Division Directors across the Commission in the development of the Commission’s Request for Comment on the Use of Artificial Intelligence in CFTC-Regulated Markets.

Commissioner-Led Policy Development and CFTC Advisory Committee - MRAC Future of Finance Subcommittee Work Plan

Second, in a series of public statements this spring, I advanced three proposals: a principles-based framework to assess the risks of integrating certain AI technologies in our markets and to ensure responsible use of AI; heightened penalties for conduct that intentionally misuses powerful AI technologies; and the creation of an interagency task force that will evaluate, assess, and harmonize guidance, supervision, and regulation addressing the increasing integration of AI in financial markets.

In addition, at its March and April meetings, the Future of Finance Subcommittee of the Market Risk Advisory Committee (MRAC) that I sponsor advanced a work plan, approved by the Subcommittee and presented to the MRAC, to explore the benefits of a survey that would provide greater transparency into the use of AI by CFTC-registered entities. The survey would complement traditional supervisory examination inquiries.

CFTC Staff Engagement

Finally, yesterday, the Commission announced the appointment of the CFTC’s first chief AI officer.

In addition to the recommendations in the TAC report, I anticipate delivering a speech tomorrow at a fintech and blockchain conference. In the speech, I will offer further details regarding the policy recommendations that my office is advancing. Allow me to preview two suggestions here.

Assessing Market Risk and Ensuring Market Integrity in CFTC Markets

The MRAC and its members are engaged in a robust discussion regarding the efficacy of existing regulation in light of the increasing adoption of AI in CFTC markets. I anticipate that this debate and discussion will yield thoughtful, consensus-driven ideas and valuable regulatory contributions.

AI Fraud Task Force

Beyond the MRAC’s efforts, I believe that the Commission should create an AI Fraud Enforcement Task Force focused on careful oversight and supervision of our markets to guard against misuse of AI technologies.

Deep-Dive Regulatory Roundtables

Many of you will recall the years-long effort to establish mandatory clearing, where appropriate, in the over-the-counter swaps market following the 2008 financial crisis. Joint CFTC-SEC roundtables were among the most valuable regulatory initiatives adopted during that period. The roundtables brought together diverse stakeholders, including market participants, other financial market regulators, public interest advocates, and academics, among others. These roundtables offered staff an opportunity to engage in in-depth discussions regarding a market that had not previously been subject to direct SEC or CFTC regulatory oversight.

Finding A Path Forward

We enthusiastically welcome the TAC Subcommittee report to enhance the regulatory toolkit available to the Commission. As the report and each of these steps demonstrate, much work remains.

With each of these steps, however, the Commission reinforces its commitment to ensuring the integrity and stability of our markets.

With each step, the Commission demonstrates leadership among U.S. and global financial market regulators as we begin to better understand increasingly advanced forms of AI.

A bit more on a few of the steps may offer useful guidance.

Coordinating Efforts Across the Commission

I applaud today’s focus on developments in AI and the tireless efforts of the members of the TAC to implement TAC’s charter and objectives.

The TAC is one of the CFTC’s five advisory committees, and its charter explains that the TAC’s critical mission is to identify and understand “the impact and implications of technological innovation in the financial services, derivative and commodity markets.”[1]

The MRAC has been deeply engaged in understanding the risks engendered by the use and deployment of AI across derivatives markets and other related markets with a view to promoting “the integrity, resilience, and vibrancy of the U.S. derivatives markets through sound regulation.”[2] We are extremely thoughtful about systemic risk issues, the stability of the derivatives market, and evolving market structure. It is critical that the Commission understand and mitigate AI-related risks in the derivatives market, taking into account the interconnectedness of financial markets, the broader financial system, and any potential systemic risk concerns.

This is the very mission of the MRAC and reflects the guiding principles of the Future of Finance Subcommittee, which I stood up earlier this year to advance solutions to address AI-related risks. During my tenure as a Commissioner, “conversations among global regulators, market participants, customers, and investors have reached a fever pitch.”[3] Innovations, particularly in generative AI, have the potential to fundamentally reshape society, including the operations of financial markets. Generative AI offers many promising new applications. At the same time, the risks of AI require the adoption of appropriate safeguards.

The Future of Finance Subcommittee went right to work and, on March 15, 2024, livestreamed its meeting with industry experts, regulators, and academics participating as panelists and invited guests. Through careful and extensive deliberation and engagement among Subcommittee members representing broad viewpoints, the Subcommittee adopted a two-part work plan.

FIRST, the Subcommittee seeks to offer a template for a Commission survey of CFTC registrants’ use of AI in CFTC-regulated markets. The design of the survey may leverage the questionnaires distributed as part of the annual staff examinations process.

SECOND, the Subcommittee may advance recommendations for guidance, advisories, or formal rulemakings based on its assessment of whether conduct in CFTC-regulated markets signals gaps in existing regulations and guidance.

Industry leaders have suggested that “the CFTC’s consideration of AI should be focused on the application of the technology within the context of a particular use case, not the technology as a whole.”[4]

It is my strong hope that there may be opportunities for coordination and collaboration between TAC and MRAC as we support the important work of the Commission.

The work of the advisory committees is incredibly important and may inform Commission action.

Greater Details on Commission-Led Proposals

At a recent FIA conference, I outlined three specific interventions to address AI.

First, I have called for the Commission to adopt a principles-based regulatory framework for addressing the increasing prevalence of AI-related risks in our markets.

Second, I have advocated for heightened penalties for those who deliberately misuse AI to engage in fraud or market manipulation.

Third, I have called for an inter-agency task force to consider the adoption of parallel, harmonized safeguards that will focus on ensuring the stability and integrity of our markets.

I would like to address these in a bit more detail today.

Principles-Based Regulatory Framework

The Commodity Exchange Act (CEA) is a principles-based statute, and the CFTC is a principles-based regulator. Registered entities, such as designated contract markets (DCMs), swap execution facilities (SEFs), and derivatives clearing organizations (DCOs), are required to comply with core principles. Generally, a registered entity has reasonable discretion to establish the manner in which it complies with a particular core principle unless the Commission adopts more prescriptive requirements by rule or regulation. I believe the Commission’s approach to mitigating the risks associated with the use of AI in our markets should be principles-based, retaining adaptability and remaining technology neutral.

One aspect of this approach is to consider existing regulations and whether certain key AI-related risks are addressed by existing risk-management requirements.

For example, DCOs are required to have an enterprise risk-management program that identifies, measures, monitors, and manages sources of risk on an ongoing basis.[5]

As another example, swap dealers are required to implement a risk management program designed to monitor and manage the risks associated with the swaps activities of the swap dealer.[6]

Heightened Penalties

AI raises new challenges; we must be prepared to respond to these challenges. AI may be used in a manner that makes certain well-known challenges—fraud and market manipulation—even more difficult to detect and identify.

To address these concerns, the Commission should introduce heightened penalties for those who intentionally use AI technologies to engage in fraud, market manipulation, or the evasion of our regulations. Bad actors who would use AI to violate our rules must be put on notice and sufficiently deterred from using AI as a weapon to engage in fraud, market manipulation, or to otherwise disrupt the operations or integrity of our markets.

To address the increased danger that AI poses, Deputy Attorney General Monaco announced that the Department of Justice would “seek stiffer sentences for offenses made significantly more dangerous by the misuse of AI” and that the Department would seek reforms to enhancements where “existing sentencing enhancements don’t adequately address the harms caused by misuse of AI.”[7]

Such heightened penalties are also consistent with the CEA and the CFTC’s existing enforcement guidance. Under the CEA, penalties must relate to the gravity of the offense.[8] In applying this mandate, the CFTC’s enforcement manual notes that “the gravity of the violation is the primary consideration in determining the appropriate civil monetary penalty” and lists factors including the “[n]ature and scope of any consequences flowing from the violations,” “impact on market integrity [and] customer protection,” and impact on “the mission and priorities of the Commission in implementing the purposes of the CEA.”[9]

Thus, where AI is weaponized to increase the gravity of an offense by increasing its scope, sophistication, or potential damage, heightened penalties may be appropriate.

The CFTC issued an enforcement advisory in October 2023, highlighting the importance of penalties that are “sufficiently high” to “achieve general and specific deterrence” so that entities do not “view[] penalties as a cost of doing business” and do not “view the potential rewards of misconduct as outweighing the potential risks.”[10] There are significant and growing opportunities for bad actors to misuse this emerging technology. A recent report from the Department of the Treasury on AI noted “an acceleration in the growth of synthetic identity fraud.”[11] The report recognized that “[t]he volume of these types of exploitations or cyber-enabled attacks is likely to rise as technological developments like Generative AI reduce the cost, complexity, and time required to leverage gaps in our digital infrastructure.”[12]

It is therefore essential that we calibrate penalties appropriately, so that it is clear that AI-enabled misconduct is not worth the risk.

Heightened penalties for the deliberate misuse of AI to engage in fraud or misconduct are a crucial step. But they are not a cure-all. At present, AI is being used in CFTC markets in a number of different ways:[13]

  • Trading (e.g., market intelligence, robo-advisory, sentiment analysis, algorithmic trading, smart routing, and transactions)
  • Risk Management (e.g., margin and capital requirements, trade monitoring, fraud detection)
  • Risk Assessments and Hedging
  • Resource Optimization (e.g., energy and computing power)
  • RegTech – Applications that enhance or improve compliance and oversight activities (e.g., surveillance, reporting)
  • Compliance (e.g., identity and customer validation, anti-money laundering, regulatory reporting)
  • Books and Records (e.g., automated trade histories from voice / text)
  • Data Processing and Analytics
  • Cybersecurity and Resilience
  • Customer Service

Inter-Agency Task Force

Our registrants may operate several businesses, may be dually registered, or may be part of a complex banking organization. AI-related risks may arise in various segments of their businesses. To continue to think holistically about the broader implications of AI-related risks and to mitigate the risk of conflicting or inconsistent regulations, I have also proposed that the Commission lead in creating an inter-agency task force with market and prudential regulators, including the CFTC, SEC, Federal Reserve System, OCC, CFPB, FDIC, FHFA, and NCUA.

The task force would focus on information sharing to identify AI-related risks across our financial system and would support the AI Safety Institute (part of the National Institute of Standards and Technology) in developing guidelines, tools, benchmarks, and best practices for the use and regulation of AI in the financial services industry. It could also provide recommendations to the AI Safety Institute as well as evaluate proposals coming out of the Institute.

This proposal is consistent with the Biden Administration’s expectations articulated in the Executive Order on AI.[14] Other regulators are already engaging in this important work of coordination and collaboration. The Department of Justice is establishing “Justice AI,” which “will convene individuals from across civil society, academia, science, and industry to draw on varied perspectives…to understand…how to ensure [we] accelerate AI’s potential for good while guarding against its risks.”[15]

The Department of the Treasury has “launched a public-private partnership dedicated to bolstering regulatory and private sector cooperation…. The [partnership] provides a forum for convening financial sector AI stakeholders across the member agencies of the Financial Stability Oversight Council (FSOC), the Financial and Banking Information Infrastructure Committee (FBIIC), and the Financial Services Sector Coordinating Council (FSSCC).”[16]

Such coordination has proved critical to the success of our regulatory efforts in the past, and it is only more critical now, as we face the unprecedented opportunities and challenges that AI brings.

Conclusion

I look forward to today’s discussion—the TAC is a valuable forum for addressing these critical issues, and I am hopeful that the conversations today will help further develop our understanding of AI’s benefits and challenges.

[4] FIA, CME, ICE, Response to Request for Comment on the Use of Artificial Intelligence in CFTC-Regulated Markets (Apr. 24, 2024).

[5] 17 C.F.R. § 39.10(d).

[6] 17 C.F.R. § 23.600.

[7] Lisa O. Monaco, Deputy Attorney General, Department of Justice, Remarks on the Promise and Peril of AI.

[8] 7 U.S.C. §§ 9a(1), 13a.

[9] Division of Enforcement, CFTC, Division of Enforcement Manual (accessed Apr. 28, 2024).

[10] CFTC, Enforcement Advisory on Penalties, Monitors and Admissions (Oct. 17, 2023).

[11] Department of the Treasury, Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector (Mar. 2024).

[12] Id.

[13] LabCFTC, A Primer on Artificial Intelligence in Financial Markets (Oct. 24, 2019), https://www.cftc.gov/media/2846/LabCFTC_PrimerArtificialIntelligence102119/download.

[14] Exec. Order No. 14,110, Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, 88 Fed. Reg. 75,191 (Oct. 30, 2023).

[15] Lisa O. Monaco, Deputy Attorney General, Department of Justice, Remarks on the Promise and Peril of AI, supra.

[16] Department of the Treasury, Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector, supra.

-CFTC-