Public Statements & Remarks

Commissioner Kristin N. Johnson Statement on the CFTC RFC on AI: Building a Regulatory Framework for AI in Financial Markets

January 25, 2024

Introduction

The increasing integration of artificial intelligence (AI) in nearly every sector of our economy and society has spurred a global debate regarding the promise and the peril of the assemblage of technologies described as AI. Registrants and other market participants are increasingly exploring and using AI and related technologies. Today, staff of the Commodity Futures Trading Commission (Commission or CFTC) published a request for comment (RFC) seeking public comment on the use of AI in markets regulated by the Commission.[1]

The request for comment reflects an engaged dialogue among my staff and staff from each of the following divisions: the Market Participants Division, the Division of Clearing and Risk, the Division of Market Oversight, and the Division of Data. AI’s rapidly expanding influence and prevalence necessitate careful evaluation and may require the introduction of new regulatory tools or methodologies. I greatly appreciate that the staff incorporated the many substantive comments from my office throughout the RFC.

Throughout my time in academia and in government service, I have focused on the potential and the perils of AI and diverse technologies, including a number of technologies introduced in the financial services sector. I have had the privilege of researching, publishing, and testifying before Congress regarding the implications of emerging innovative technologies, including distributed digital ledger technologies that enable the creation of digital assets as well as AI technologies.[2] A few months ago, I delivered the Manuel F. Cohen endowed lecture at George Washington University Law School and my speech focused exclusively on the potential benefits of and concerns related to integrating AI in financial markets.[3]

I strongly support the Commission’s efforts to advance inquiries regarding the integration of AI in our markets and to explore the need to introduce guardrails to mitigate the risks that AI technologies may present.

As the RFC notes, there are potential benefits to developing and deploying AI in derivatives markets, but there are also notable risks, including risks relating to market safety, customer protection, governance, data privacy, mitigation of bias, and cybersecurity, among other issues.

I applaud the staff’s efforts to improve the Commission’s understanding of the use of AI technologies by market participants as well as the potential for the Commission to rely on AI in conducting supervisory oversight. The comments that the Commission will receive in response to the RFC will enable the Commission to evaluate the need for any formal guidance and possibly rulemakings regarding the integration of AI in CFTC-regulated markets.

In light of the significant and potentially transformative changes of AI in our markets, policymakers and regulators cannot stand still. We must monitor the development and deployment of AI—and, where necessary, we must act to counter the new challenges that AI will create.

The State of AI Development and Deployment

As I recently noted in the Cohen Lecture:

At the most general level, the term AI refers to “a set of techniques aimed at approximating some aspect of human or animal cognition” relying on a system of algorithms to simulate human learning and a machine to execute the correlated activity. Aside from this general sketch, a generally agreed-upon definition of AI remains elusive. Instead, the term AI refers to a large set of information or computer sciences. . . . Professor Harry Surden points out, “[h]owever, AI is truly an interdisciplinary enterprise that incorporates ideas, techniques, and researchers from multiple fields, including statistics, linguistics, robotics, electrical engineering, mathematics, neuroscience, economics, logic, and philosophy, to name just a few.” Irrespective of the definitional difficulty surrounding the term “AI,” recent advances in computer processing speed, algorithms, and the rise of big data have made machine learning the most popularly known AI technique. . . . In machine learning, computers compute data using an algorithm to perform an assigned objective function, make predictions, and automate certain tasks.[4]
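The quoted passage compresses the core machine-learning loop into a sentence: an algorithm is fit to data against an objective function and is then used to make predictions automatically. The short Python sketch below makes that loop concrete; it is purely illustrative, using synthetic data and an arbitrary model choice, and is not drawn from the RFC or from any regulated system.

```python
# A minimal, illustrative sketch of the machine-learning loop described above.
# The data are synthetic and the model choice is arbitrary.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)

# "Big data": observations with two features each, and a binary label
# generated by a simple hidden pattern the algorithm must learn.
X_train = rng.normal(size=(500, 2))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)

# Training: the algorithm optimizes its objective function against the data.
model = LogisticRegression()
model.fit(X_train, y_train)

# Inference: the fitted model automates predictions on inputs it has not seen.
X_new = rng.normal(size=(5, 2))
print(model.predict(X_new))
```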

While markets continue to explore use cases for generative AI, there is an indisputable and remarkable increase in the adoption of AI.

In 2023, McKinsey conducted a global survey on the state of AI, with a focus on generative AI. The survey concluded that “[l]ess than a year after many of these tools debuted, one-third of our survey respondents say their organizations are using gen AI regularly in at least one business function… and more than one-quarter of respondents from companies using AI say gen AI is already on their boards’ agendas.”[5] According to that same survey, “40 percent of respondents [said] their organizations will increase their investment in AI overall because of advances in gen AI.”[6]

Domestic and International Efforts to Regulate AI

In light of this increased adoption of AI, it is essential that policymakers across the globe work to harness the potential of AI and address related risks. In October 2023, the UN launched an AI Advisory Body to examine the “risks, opportunities and international governance” of AI technologies.[7] The Advisory Body “is expected to make recommendations by the end of the year on the areas of international governance of AI, shared understanding of risks and challenges, and key opportunities.”[8] In December, the European Parliament and European Council reached a provisional agreement on the Artificial Intelligence Act.[9] If approved, the AI Act will become the world’s first comprehensive law regulating and restricting the use of AI.

Closer to home, these issues are also receiving significant attention. At a Senate hearing on Artificial Intelligence in Financial Services in September 2023, Senator Sherrod Brown noted leaders’ “responsibility to ensure this technology is used—when it is used at all—to protect consumers and savers, while promoting a fair and transparent economy that works for middle-class Americans—rather than taking advantage of them.” [10] He further argued that “[a]t a minimum, the rules that apply to the rest of our financial system should apply to these new technologies.”[11]

In July, the SEC released a proposed rule to address conflicts of interest arising from broker-dealers’ and investment advisers’ use of predictive technologies in their interactions with investors. It noted that such technologies “can bring potential benefits for firms and investors. . . [but] they also raise the potential for conflicts of interest associated with the use of these technologies to cause harm to investors more broadly than before.”[12] The proposed rule reasons that, “[d]ue to the inherent complexity and opacity of these technologies as well as their potential for scaling, . . . such conflicts of interest should be eliminated or their effects should be neutralized, rather than handled by other methods of addressing the conflicts, such as through disclosure and consent.”[13]

The White House has released a Blueprint for an AI Bill of Rights, and this past October, President Biden issued an ambitious executive order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (October Executive Order). In the October Executive Order, President Biden recognized the urgency of “governing the development and use of AI safely and responsibly,” which he noted will require “a society-wide effort that includes government, the private sector, academia, and civil society.”[14] President Biden went on to encourage agencies such as the CFTC to “consider using their full range of authorities to protect American consumers from fraud, discrimination, and threats to privacy and to address other risks that may arise from the use of AI. . . .”[15]

Private actors have also been putting forward recommendations. For example, guidelines for Salesforce’s development of generative AI prioritize: accuracy (“verifiable results that balance accuracy, precision, and recall”), safety (“make every effort to mitigate bias, toxicity, and harmful output”), honesty (“respect data provenance and ensure that we have consent to use data”), empowerment (“identify the appropriate balance” of fully automated processes and processes requiring human judgment), and sustainability (“develop right-sized models where possible to reduce our carbon footprint”).[16]

Many of these efforts point toward similar principles for the regulation and governance of AI. The White House Blueprint for an AI Bill of Rights emphasizes five key principles: safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration, and fallback.[17] Similarly, the European Council and European Parliament’s provisional agreement on the AI Act designates AI systems in the insurance and banking sectors as “high risk” and therefore subject to requirements including a fundamental rights impact assessment and a right of citizens to receive explanations about decisions based on such AI systems.[18]

Integration of AI in Financial Markets

Applications of AI in financial markets are only increasing—government regulators, self-regulatory organizations, and market participants are all adopting AI-based technologies and exploring additional applications for the future.

Regulatory Surveillance

Just five years ago, a report from the Organization for Economic Co-operation and Development (OECD) found that, of the governments that had promulgated an official AI strategy, only four had a dedicated strategy for public sector AI use (most others addressed public sector use within strategies covering broader private sector uses).[19] It is clear, though, that in the U.S., the public sector has been keenly focused on how to start using AI. In a 2020 report to the Administrative Conference of the United States (ACUS) delivered by Stanford University and New York University researchers, the authors found that nearly half of the federal agencies studied (45%) had experimented with AI and related machine learning tools.[20] The use of AI in regulatory surveillance applications is but one avenue being explored by federal agencies.

As I noted in the Cohen Lecture:

The CFTC has on staff surveillance analysts, forensic economists, and futures trading investigators, each of whom identify and investigate potential violations. These groups use supervisory technology (SupTech) in support of their work. Over the past few years, the CFTC has transitioned much of its data intake and data analysis to a cloud-based architecture. This increases the flexibility and reliability of our data systems and allows us to scale them as necessary. This transition will allow the Commission to store, analyze, and ingest this data more cost-effectively and efficiently.[21]

Because the CFTC is able to obtain and aggregate data across markets and products, we can develop a more complete picture [of the market]…. This allows us to detect potential misconduct or other market disruption more effectively and take appropriate action earlier. An increasing proportion of the cases brought by our Division of Enforcement are driven by data analytics rather than more traditional sources such as complaining customers, whistleblowers, or self-disclosure. Of course, data analytics also plays an important role in developing and prosecuting cases that do come to us through those historic avenues.[22]

Similarly, AI is being used at the SEC to “target[] fraud in accounting and financial reporting, …trading-based market misconduct, particularly insider trading, and … unlawful investment advisors and asset managers.”[23]
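To illustrate, in deliberately simplified form, how analytics-driven surveillance can surface leads, the sketch below flags account-days whose trading volume deviates sharply from that account’s own history. The data, column names, and two-standard-deviation threshold are hypothetical assumptions; the SupTech systems regulators actually use are far more sophisticated.

```python
# A deliberately simplified sketch of analytics-driven surveillance:
# flag account-days whose volume is an outlier relative to that
# account's own history. All data and thresholds are hypothetical.
import pandas as pd

volumes = pd.DataFrame({
    "account": ["A"] * 10 + ["B"] * 10,
    "volume": [100, 110, 95, 105, 98, 102, 107, 99, 103, 900,   # A spikes on day 10
               200, 210, 190, 205, 198, 202, 207, 195, 201, 199],
})

# Per-account baseline statistics.
stats = volumes.groupby("account")["volume"].agg(["mean", "std"])
volumes = volumes.join(stats, on="account")
volumes["zscore"] = (volumes["volume"] - volumes["mean"]) / volumes["std"]

# Surface account-days more than two standard deviations above the account's
# norm for a human analyst to review; this is a lead, not a finding.
alerts = volumes[volumes["zscore"] > 2]
print(alerts[["account", "volume", "zscore"]])
```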

Compliance Monitoring

AI also has the potential to increase the efficiency of SRO and market participants’ compliance and monitoring efforts. As I noted in the Cohen Lecture:

The Financial Industry Regulatory Authority (FINRA) uses AI and other data-driven tools to monitor trades and promote market security. Market participants, including firms like futures commission merchants as well as exchanges and SROs, are using AI for regulatory purposes by digitizing, reviewing, and interpreting new and existing regulatory intelligence so that they can ensure they are compliant as an operator. There are a number of vendors who sell RegTech software to market participants to assist them in risk management, as well as compliance functions like detection of potential market abuse.[24]

Provision of Financial Services

Finally, market participants themselves are adopting AI in the provision of financial services. An IOSCO report on the use of AI found that as early as 2021, market participants were using AI in “[a]dvisory and support services; [r]isk management; [c]lient identification and monitoring; [s]election of trading algorithm; and [a]sset management/[p]ortfolio management.” The report further found that asset managers were using AI to “[o]ptimise portfolio management; [c]omplement human investment decision-making processes by suggesting investment recommendations; and [i]mprove internal research capabilities, as well as back office functions.”[25]

The Commission’s Efforts to Understand the Uses and Risks of AI

Today’s release of the RFC is an important step that will enable the CFTC to better understand existing and emerging uses of AI, as well as existing and emerging risks. Such information will help us to shape the development and deployment of AI in CFTC-regulated markets in a manner that harnesses AI’s many promises, while responding to the many new challenges that will arise.

Uses of AI

It is essential that the CFTC understand how market actors are adopting, and will adopt, AI in the derivatives markets. Accordingly, the RFC begins with a number of questions regarding current and potential uses of AI in CFTC-regulated markets. In addition to broadly seeking information on current and future uses, the RFC seeks input on a number of specific, key items. Critically, the RFC asks respondents to weigh in on the proper definition of AI—how broad or narrow the definition should be, and how to draw the line between AI and other automated trading strategies currently in use.[26]

Risks of AI

AI also brings with it new risks and exacerbates existing ones.

As recognized in an IOSCO report on the use of AI in financial markets, these risks include: “Governance and oversight; Algorithm development, testing and ongoing monitoring; Data quality and bias; Transparency and explainability; Outsourcing; and Ethical concerns.”[27] Understanding the role of third parties in the development of AI for CFTC-regulated entities is also essential, and I appreciate the incorporation of additional questions that seek even more granular detail regarding the use of third parties and the risks created thereby.[28] I also appreciate the build-out of additional questions regarding key risks, including bias in data, the need for explainability and transparency, concerns regarding market manipulation and fraud, and the addition of a question concerning harm to competition.[29]

The RFC seeks to better understand the challenges and concerns that the application of AI in CFTC-regulated markets raises. I want to highlight several.

Cybersecurity

Cyber threats are likely only to grow in prevalence as bad actors develop and adopt new AI technologies. Accordingly, the RFC requests information on the use of AI by market participants in addressing cyber threats.

Third-Party Development of AI

The RFC addresses the role of third parties in developing AI technologies, requesting information on the use of third parties to develop AI technologies in-house, as well as the acquisition of technologies from third parties. This is a critical area in which to gather information, as reliance on third parties creates many concerns.

As the IOSCO report noted:

Regulators should require firms to have the adequate skills, expertise and experience to develop, test, deploy, monitor and oversee the controls over the AI and ML that the firm utilises. Compliance and risk management functions should be able to understand and challenge the algorithms that are produced and conduct due diligence on any third-party provider, including on the level of knowledge, expertise and experience present.[30]

Regulators should require firms to understand their reliance and manage their relationship with third-party providers, including monitoring their performance and conducting oversight. To ensure adequate accountability, firms should have a clear service level agreement and contract in place clarifying the scope of the outsourced functions and the responsibility of the service provider. This agreement should contain clear performance indicators and should also clearly determine rights and remedies for poor performance.[31]

Market Manipulation and Fraud

As the RFC notes, “Bad actors are increasingly able to use AI to engage in more sophisticated forms of fraud and illegal conduct.” AI creates the potential for increased market manipulation and fraud. The RFC seeks additional information about these risks, including seeking comment on whether the adoption of AI may impede enforcement of antifraud and market manipulation regulations and asking for details regarding efforts to use AI-based market supervisory technologies to detect market manipulation or fraud.

As I noted in connection with the CFTC’s filing of a complaint targeting a type of romance fraud known as “Pig Butchering,” in which a criminal impersonates a potential romantic partner in order to defraud customers:

According to the Justice Department, in 2022, investment fraud caused the highest losses of any scam reported by the public to the Federal Bureau of Investigation’s (FBI) Internet Crimes Complaint Center, totaling $3.31 billion. Frauds involving cryptocurrency, including pig butchering, represented the majority of these scams, increasing a staggering 183% from $907 million in 2021 to $2.57 billion in reported losses by 2022. In total across the U.S., by the end of 2022, more than 46,000 people had reported losing money in crypto-related frauds…. As the FBI’s investigations in this area demonstrate, victims of pig butchering frauds are targeted and primed by scammers in such a way that they may be particularly exposed. In the case of the romance scam, a victim is chosen specifically because that individual has declared himself or herself vulnerable by hoping to meet and develop a meaningful romantic relationship. …Throughout my time as a Commissioner, I have emphasized the role regulators must play in protecting consumers.[32]

These problems will only increase as AI becomes more sophisticated, enabling trading strategies better able to evade market surveillance and facilitating more convincing scams through increasingly sophisticated “deep fake” videos and other content.

Bias and Discrimination

The RFC asks a number of questions addressing bias and discrimination in the use of AI, seeking information regarding the quality of data used to train AI systems and measures to address biases in data and algorithms.

As I noted in the Cohen Lecture, there is a risk that “bias and discrimination in underlying data may be amplified through the use of generative AI. Facial recognition software may be helpful in certain law enforcement contexts”—but carries with it the potential to reinforce existing disparities, especially given known “limitations of such AI platforms to recognize or distinguish facial features of individuals with darker complexions and the established imbalanced representations of women of color in popular training data sets.”[33]

In consumer finance, credit-scoring AI models are already being used to determine who can access credit and at what price.[34] “There is a risk that automated programs, algorithms, etc. can introduce unintended bias as a consequence of the way they have been trained, or the datasets used to build out their knowledge base.”[35]
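As one concrete, hypothetical illustration of how such bias might be surfaced, the sketch below computes a simple disparity metric: the gap in a credit model’s approval rates across demographic groups, sometimes called a demographic parity gap. The decisions, group labels, and any notion of an acceptable gap are assumptions for illustration only, not a prescribed methodology.

```python
# A minimal sketch of one common bias check: compare approval rates
# produced by a hypothetical credit model across demographic groups.
# All data here are invented for illustration.
import pandas as pd

decisions = pd.DataFrame({
    "group": ["X"] * 6 + ["Y"] * 6,
    "approved": [1, 1, 1, 0, 1, 1,   # group X: 5 of 6 approved
                 1, 0, 0, 1, 0, 0],  # group Y: 2 of 6 approved
})

rates = decisions.groupby("group")["approved"].mean()
gap = rates.max() - rates.min()

print(rates)
print(f"Approval-rate gap: {gap:.2f}")  # a large gap is a signal to investigate,
                                        # not by itself proof of unlawful bias
```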

Privacy Rights

AI also raises significant privacy concerns. We must ensure that “the data used by these tools, whether taken directly from regulated entities or given by them periodically, is stored securely and used only for its intended purpose.” As I have previously explained, it is imperative as a matter of basic consumer rights that we ensure the integration of AI does not hardwire discrimination prevalent in training data into emerging AI technologies. The RFC accordingly seeks information regarding risks to privacy rights and the efforts being taken by market participants to protect privacy.

New Governance Models

The RFC specifically recognizes governance concerns, asking for further information on how CFTC-regulated entities are modifying governance structures in response to AI.

To address these risks, it is likely that we will need to develop new governance models. As I have noted previously, the “use of automated tools at this point should be only one part of the toolkit,” because “AI methods [are] vulnerable to underperforming values-centered analysis that focuses on principles such as equity, justice, transparency, and ethics.”[36] Governance that provides for human oversight of AI models, by those with the mandate to consider these values, will therefore be essential.

As the IOSCO report recommended,

Regulators should consider requiring firms to have designated senior management responsible for the oversight of the development, testing, deployment, monitoring and controls of AI and ML. This includes a documented internal governance framework, with clear lines of accountability. Senior Management should designate an appropriately senior individual (or groups of individuals), with the relevant skill set and knowledge to sign off on initial deployment and substantial updates of the technology.[37]
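A documented governance framework with clear lines of accountability can take many forms; the sketch below is one hypothetical illustration of a sign-off record tying each deployment or substantial update of a model to a named, suitably senior approver, in the spirit of the IOSCO measure quoted above. All field names and values are invented for illustration.

```python
# A hypothetical sketch of the kind of documented accountability record the
# IOSCO measure contemplates: each model deployment or substantial update
# carries a named, suitably senior approver. Field names are invented.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelSignOff:
    model_name: str
    version: str
    change_summary: str          # what is being deployed or updated
    approver: str                # designated senior individual
    approver_role: str           # evidence of appropriate seniority and skill set
    approved_on: date
    conditions: list[str] = field(default_factory=list)  # e.g., monitoring requirements

signoff = ModelSignOff(
    model_name="order-flow-anomaly-detector",
    version="2.1.0",
    change_summary="Retrained on 2023 data; new feature set",
    approver="Jane Doe",
    approver_role="Chief Risk Officer",
    approved_on=date(2024, 1, 25),
    conditions=["30-day enhanced monitoring", "monthly drift review"],
)
print(signoff)
```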

Conclusion

Roughly one month ago, the Market Risk Advisory Committee (MRAC or the Committee)—the advisory committee that I sponsor—held its third meeting of the year, at which the Committee took up the question of how MRAC’s Future of Finance subcommittee (FOF subcommittee) might address AI in 2024.

MRAC anticipates offering formal recommendations to the Commission on a number of related topics, including the integration of generative AI in our markets, the relationship between AI and blockchain technology, and the risks (including systemic risks) presented by each of these new technologies.

The RFC released today is an important step towards that goal.


[1] The RFC was drafted by the Division of Clearing and Risk, the Division of Data, the Division of Market Oversight, the Office of the General Counsel, and the Market Participants Division.

[2] Commissioner Kristin Johnson, Opening Statement on Measuring Benefits and Mitigating the Risks of Integrating Artificial Intelligence (July 18, 2023), https://www.cftc.gov/PressRoom/SpeechesTestimony/johnsonstatement071823; Commissioner Kristin Johnson, Artificial Intelligence and the Future of Financial Markets, Manuel F. Cohen Lecture, George Washington University Law School (Oct. 17, 2023); Kristin N. Johnson & Carla L. Reyes, Exploring the Implications of Artificial Intelligence, 8 J. Int'l & Comp. L. 315, 315 (2021); Kristin N. Johnson, Regulating Innovation: High Frequency Trading in Dark Pools, 42 J. Corp. L. 833 (2017); Kristin Johnson, Frank Pasquale, and Jennifer Chapman, Artificial Intelligence, Machine Learning, and Bias in Finance: Toward Responsible Innovation, 88 Fordham L. Rev. 499 (2019).

[3] Commissioner Kristin Johnson, Artificial Intelligence and the Future of Financial Markets, Manuel F. Cohen Lecture, George Washington University Law School (Oct. 17, 2023).

[4] Commissioner Kristin Johnson, Artificial Intelligence and the Future of Financial Markets, supra note 2, citing Kristin N. Johnson & Carla L. Reyes, Exploring the Implications of Artificial Intelligence, 8 J. Int'l & Comp. L. 315, 321–22 (2021) (quoting Ryan Calo, Artificial Intelligence Policy: A Primer and Roadmap, 51 U.C. Davis L. Rev. 399, 404 (2017); Harry Surden, Artificial Intelligence and Law: An Overview, 35 Ga. St. L. Rev. 1305, 1310 (2019); Michael Simon et al., Lola v. Skadden and the Automation of the Legal Profession, 20 Yale J. L. & Tech. 234, 254 (2018); Michael L. Rich, Machine Learning, Automated Suspicion Algorithms, and the Fourth Amendment, 164 U. Pa. L. Rev. 871, 880 (2015)).

[5] McKinsey & Company, The State of AI in 2023: Generative AI’s Breakout Year (Aug. 1, 2023).

[6] Id.

[7] UN News, New UN Advisory Body Aims to Harness AI for the Common Good (Oct. 26, 2023).

[8] Id.

[9] European Parliament, Press Release, Artificial Intelligence Act: Deal on Comprehensive Rules for Trustworthy AI (Dec. 12, 2023).

[10] Senator Sherrod Brown, Opening Statement on Artificial Intelligence in Financial Services (Sept. 20, 2023).

[11] Id.

[12] 88 FR 53960 at 53962.

[13] Id. at 53967.

[14] Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, Sec. 1 (Oct. 30, 2023), at https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/.

[15] Id. Sec. 8(a). See also White House Office of Science and Technology Policy, Blueprint for an AI Bill of Rights (Oct. 2022) (providing guidance on the design, development, and deployment of artificial intelligence (AI) and other automated systems so that they protect the rights of the American public).

[16] Paula Goldman & Kathy Baxter, Generative AI: 5 Guidelines for Responsible Development, Salesforce (Feb. 7, 2023).

[17] White House Office of Science and Technology Policy, Blueprint for an AI Bill of Rights, supra note 15.

[18] European Parliament, Press Release, Artificial Intelligence Act: Deal on Comprehensive Rules for Trustworthy AI, supra note 9.

[19] Berryhill, J., et al. (2019), "Hello, World: Artificial intelligence and its use in the public sector", OECD Working Papers on Public Governance, No. 36, OECD Publishing, Paris, at 73, https://doi.org/10.1787/726fd39d-en.

[20] David Freeman Engstrom, Daniel E. Ho, Catherine M. Sharkey & Mariano-Florentino Cuéllar, Government by Algorithm: Artificial Intelligence in Federal Administrative Agencies (Feb. 2020) (report to the Admin. Conf. of the U.S.) at 6, https://www.acus.gov/sites/default/files/documents/Government%20by%20Algorithm.pdf.

[21] Commissioner Kristin Johnson, Artificial Intelligence and the Future of Financial Markets, supra note 2.

[22] Id.

[23] David Freeman Engstrom et al., Government by Algorithm: Artificial Intelligence in Federal Administrative Agencies, supra note 20, at 23.

[24] Commissioner Kristin Johnson, Artificial Intelligence and the Future of Financial Markets, supra note 2.

[25] OICU-IOSCO, Final Report FR06/2021, The use of artificial intelligence and machine learning by market intermediaries and asset managers (Sept. 2021) at 1.

[26] I appreciate Division staff’s addition to the RFC of questions from my office on this point, including “Would defining AI more broadly or adopting a more narrowly tailored definition of AI be necessary for guidance or proposed rules applied to CFTC-regulated markets?” and “What criteria should be used to differentiate between AI and other forms of automated trading?”

[27] OICU-IOSCO, Final Report, The use of artificial intelligence and machine learning by market intermediaries and asset managers, supra note 25 at 1.

[28] Questions from my office added on this point include: “Are there any risks specifically associated with using AI technologies created by third-party providers? What efforts are firms using AI technology from third-party service providers taking to understand and mitigate these risks? What due diligence procedures are in place to evaluate the risks posed by third-party providers prior to adopting third-party AI technologies? What disclosures should be required, both regulatory and to other market participants, regarding a firm’s use of third-party providers for AI services?”

[29] Questions from my office added on these points include: “How are biases that are reflected in historical data identified and addressed?”; “If SROs are using AI to oversee members, are there particular issues concerning explainability in the context of investigations and enforcement actions? If firms are using AI models to determine obligations or requirements for other parties, such as margin requirements, are there AI-specific transparency issues? Describe any potential transparency concerns that may arise as a result of SROs adopting AI technologies as part of their market oversight responsibilities.”; “Please also specifically comment on whether the adoption of AI may impede enforcement of antifraud and market manipulation regulations. For firms that integrate AI into trading decision-making, describe the policies and practices adopted to prevent the use of AI-driven strategies in schemes designed to manipulate the market. Describe efforts to use AI-based market supervisory technologies to detect market manipulation or fraud.”

[30] OICU-IOSCO, Final Report, The use of artificial intelligence and machine learning by market intermediaries and asset managers, supra note 25.

[31] Id.

[32] Commissioner Kristin N. Johnson, Statement Regarding CFTC Charges in “Pig Butchering” Case (Jan. 19, 2024), at https://www.cftc.gov/PressRoom/SpeechesTestimony/johnsonstatement011924.

[33] Commissioner Kristin Johnson, Artificial Intelligence and the Future of Financial Markets, supra note 2.

[34] Makada Henry-Nickie, How Artificial Intelligence Affects Financial Consumers, Brookings (Jan. 31, 2019), at https://www.brookings.edu/articles/how-artificial-intelligence-affects-financial-consumers/.

[35] Commissioner Kristin Johnson, Artificial Intelligence and the Future of Financial Markets, supra note 2.

[36] Id.

[37] OICU-IOSCO, Final Report, The use of artificial intelligence and machine learning by market intermediaries and asset managers, supra note 25 at 2.

-CFTC-