Public Statements & Remarks

Keynote Address of Chairman J. Christopher Giancarlo at Fintech Week, Georgetown University Law School

“Quantitative Regulation:  Effective Market Regulation in a Digital Era”

November 7, 2018

Thank you.  Let me begin by thanking Georgetown Law Professor and Director of the Institute of International Economic Law, Chris Brummer. In only a few short years under his leadership, the Georgetown Law Fintech Conference has become one of the true forums for thought leadership in the overcrowded industry of fintech conferences.

Thank you, Professor Brummer.  It is an honor to be invited to be here today.


This year, we celebrate an important, though somewhat forgotten, anniversary.  We are now seventy years from the creation in 1948 of the world’s first stored-program computer, the “Manchester Baby”.  The “Baby” was designed as a test platform for the first true random-access computer memory.  It was constructed in a separate building at the University of Manchester and housed in a room that was smaller than the one in which we meet today.

Clearly, so much has changed in the seven decades since the birth of the “Baby”.  The average modern laptop computer has 30 million times its processing speed.  The average contemporary smart phone has 500 million times its storage capacity.  Just in the past decade alone, we have witnessed technological advancements ranging from the internet’s permeation through all sectors of the economy to exponential increases in computing power to computer science breakthroughs in developing permission-less and largely autonomous economic networks.

So much change – all in the course of a single human lifetime.

So much of our world today – from information to education, from journalism to free speech, from music to manufacturing, from transportation to commerce, from travel to leisure, and from agriculture to nutrition – is undergoing a digital transformation.  You might call it the digitization of modern life.  Others call it the 4th industrial revolution,[1] a melding of science and technology with human existence and society.

That is why it is so good to be with you today.  This 2018 Georgetown Law Fintech conference is ever so timely.  The topics under consideration reflect a common realization that we are entering a new phase in human history, when exponential digital technologies are rapidly changing the very nature of human identity, work, leisure and community.

It is certainly no surprise to this audience that, just as our lives are being transformed, so the world’s trading markets are going through the same digital revolution from analog to digital, from human to algorithmic trading and from stand-alone centers to interconnected trading webs.  Emerging digital technologies are impacting trading markets and the entire financial landscape with far ranging implications for capital formation and risk transfer.  They include algorithm-based trading, “smart” contracts, Distributed Ledger Technology (DLT), and the topic that I want to discuss with you today: big data, automated data analysis, and artificial intelligence (AI).

Quantitative Regulation

The CFTC has been no bystander to the digitization of modern markets.  We have been closely engaged through three points of contact: our Technology Advisory Committee (TAC)[2], our market intelligence branch,[3] and LabCFTC, the first ever innovation and technology engagement initiative by a U.S. market regulatory agency that seeks to modernize both our external and internal approach to such developments.[4]

Yet, there is one area of the technology revolution that is central to all the others and where the CFTC must run harder to keep pace.  That is in the area of big data and its uses, including automated data analysis and machine learning and intelligence.  The CFTC must run faster because the amount of market data continues to grow exponentially and to become ever more granular, quantitative data analysis increasingly drives commercial trade execution and strategy, and limited agency funding requires increasing operational efficiency.

Indeed, as we move to a world of scalable data lakes – or perhaps even oceans – many market processes and decisions will become increasingly reliant on the ability to proficiently glean insights and take actions based on thoughtful data analysis.  For America to continue to provide a home to the most robust and well-regulated markets, our taxpayers rightly expect that we in government keep pace with digitization and execute our regulatory missions in the most effective and efficient ways possible.

Today, I will explain how the CFTC regulatory response to the increasing centrality of data in every aspect of contemporary market activity must be enhanced with up-to-date quantitative data analytics capability.  The CFTC and other modern market regulators must start the next phase of regulatory data collection, automated analysis and data-driven policy application.  We must pioneer a new frontier of quantitative regulation or “QuantReg.”  We must become a truly quantitative regulator.

The Centrality of Data

At least three common threads run through the digitization of modern financial and commodity derivatives markets:  first, the central role of data; second, the critical importance of automated data analysis to enhance efficiencies; and third, the introduction of state-of-the-art machine learning and artificial intelligence to increase effectiveness.  All three of these threads must be woven into the fabric of a modern market regulator capable of being fit-for-purpose in our digital world.

Let’s unpack these three threads a bit further.  With respect to the centrality of data, trading market participants are now able to generate, consume, process, organize, and analyze volumes of data so vast that the term “Big Data” no longer appears novel or even necessary to note.[5]  To put it simply, all activity in trading markets is being digitized and captured in bytes and bits – data is King.

All of this big data, however, is of little use to us unless it can be cleaned, organized, standardized, and made sense of.  Many efforts today are focused on these elements of data collection and processing: how can we take what has to date been messy, unstructured data and convert it into a consumable form?  And perhaps even more importantly, how can we design future computing systems, databases, and networks that standardize data formats and fields and speak to each other in order to ensure rich, coherent, and complete data sets?[6]

A World of Automation

If data is King, then automating processes which previously required mindless and error-prone human effort is the critical work of the King’s Court.  Robotic process automation (RPA) refers to the application of computer technology in order to process relatively simple, repetitive tasks in a consistent and automated fashion.[7]  This type of automation frequently relies on the consumption and processing of standardized data, which can then result in a response less prone to human-error and less costly to implement.  Additionally, automating simple or low-value functions allows human resources to focus on higher-value efforts – essentially eliminating mindless paper-pushing and potentially boosting worker morale.
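The division of labor described above – machines handling standardized, repetitive cases while humans keep the judgment calls – can be sketched in a few lines.  This is a minimal illustration, not any real RPA product; the record fields, thresholds, and outcomes below are hypothetical:

```python
# Illustrative sketch of rule-driven automation in the RPA spirit:
# standardized inputs, fixed repeatable rules, escalation of anything non-standard.
from dataclasses import dataclass


@dataclass
class RefundRequest:
    request_id: str
    amount: float
    account_valid: bool  # assume an upstream system has validated the account


def process_request(req: RefundRequest) -> str:
    """Apply fixed, repeatable rules; route non-standard cases to a human."""
    if not req.account_valid:
        return "escalate-to-human"   # automation handles only the routine path
    if req.amount <= 100.0:
        return "auto-approve"        # simple, high-volume case
    return "queue-for-review"        # higher-value items get human judgment


requests = [
    RefundRequest("R-1", 25.0, True),
    RefundRequest("R-2", 250.0, True),
    RefundRequest("R-3", 10.0, False),
]
print([process_request(r) for r in requests])
# → ['auto-approve', 'queue-for-review', 'escalate-to-human']
```

Note that the rules here are entirely prescribed in advance – the defining trait of simple RPA, and the point of contrast with the machine learning techniques discussed below.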

Many view automation as part of an evolutionary bridge to machines that can learn and generate autonomous insights from data, and in some instances the two innovations work in conjunction (more on this in a minute). Even if this is the case, it should not lead to underestimating the value that automation brings to economic activity.  As we think about the application of these technologies to trading markets, it is no leap of the imagination to consider how automation could help reduce cost and bring efficiencies to trade matching, processing, and clearing and settlement.  Indeed, when paired with systems inspired by DLT that standardize and distribute data to market actors – and even regulators – we begin to see a world where the majority of standard tasks are managed by machines.[8]

A New Age of Artificial Intelligence

If data is King, and automation is the work of the King’s Court, then machine learning and AI may be the tools to build an enlightened Kingdom.[9]  Undoubtedly, historical efforts to create general AI have largely failed to live up to the promise.[10]  Today, however, it appears a combination of enhancements in computing power and breakthroughs in computer science may mark the beginning of what will be a steady stream of advancements around the development of machine learning and AI.[11]  And these advancements are likely to have a profound impact on our economy, markets, and by extension on how we regulate.

In fact, development and deployment of AI in financial services and amongst regulators is said to be accelerating.[12]  Investment banks and insurance firms are utilizing AI in automation of repetitive acts, such as know your customer, anti-money laundering, claims processing, trade reconciliations and fraud identification.[13]  Hedge funds, prop desks and asset managers are increasingly leveraging AI techniques to automate trading strategies to achieve, maintain and increase potential trading returns[14] and support or disprove fundamental investment theories.[15]  We at the CFTC have been exploring application of machine learning techniques through our Division of Enforcement and in conjunction with our Whistleblower Program, and colleagues at FINRA have made great and informative strides in leveraging the cloud and machine learning, particularly in the surveillance space.[16]

Moreover, market participants are indeed moving well beyond mere automation of tasks.  No longer is artificial intelligence simply automation based on heuristic modeling and programming “if-then” rules.  New machine learning techniques make it possible for computers to learn on their own.[17]  Unlike mere RPA, recent breakthroughs in machine learning are predicated on the idea that data can be fed to machines without prescribed rules – instead, the machine can sort data and find connections and correlations that may not be observable to a human analyst.[18]  This, of course, does raise questions about “explainability,” which I will discuss further momentarily, but more broadly will underpin further technological advances.

At their core, efforts to advance machine learning are focused on enhancing predictive and actionable analytics capabilities.  Activity will likely occur across this spectrum with ongoing quant and computer science breakthroughs making even more complex predictive analytics and operations possible.  Today, innovators have already begun integrating machine learning and RPA, a combination referred to as “cognitive RPA.”  While RPA focuses on the simple repetitive tasks explained earlier, the machine learning function identifies patterns or makes predictions in order to help prioritize tasks for the RPA system.[19]

What is also becoming clear is that these higher-order computing technologies are likely to become as ubiquitous to our commodity and financial derivatives markets as the Internet has become today.  This means that all organizations and actors – including market regulators like the CFTC – will need to keep pace with the advance of AI in order to succeed.  Indeed, as management author Ram Charan stated, “Any organization that is not a math house now or is unable to become one soon is already a legacy [organization].”[20]

Quantitative Regulation Supporting Human Judgement

As we think about market regulation that is powered by data automation and machines, it is easy to fear what this may mean for humans.[21]  And, to be sure, there will be real challenges presented by these technologies, ranging from impacts on labor markets, to questions surrounding the explainability of a machine’s conclusions or actions and their consistency with existing regulations, to the societal impact of big data collection and automating traditionally human processes.[22]

Yet, in my view, being a quantitative regulator does not mean replacing human judgment and market intelligence; it means reinforcing it.  In fact, the objective of quantitative regulation is to more firmly support the skilled teams that carry out the agency’s ongoing activities in market surveillance, enforcement, regulatory compliance and rule examinations, market intelligence, policy development, and market reform.  It means freeing agency staff from repetitive and low-value tasks to focus on high-value activities that require their expert judgment and domain knowledge.[23]  It means marshalling quality data, efficiently and perhaps algorithmically analyzed, upon which human judgment can be deployed and expanded.[24]  Quantitative regulation means melding machines and humans, not separating them.

For example, on the oversight side of the spectrum one can envision using machines to independently identify segments of the markets where concentration risks or unrecognized counterparty exposures are emerging and flag them for staff consideration and action.  In enforcement, new machine-learning based surveillance tools could sniff out patterns of likely illegal trading activity or attempts to manipulate markets for enforcement analysis.  These tools will become even more important as emerging blockchain technologies seek to decentralize markets or disintermediate traditional actors.  It is critical that we have the ability to keep pace with those who attempt to defraud, distort, or manipulate.
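The “flag for staff consideration” pattern can be made concrete with a deliberately simple statistical screen.  This toy sketch surfaces unusual observations for human review; the data and threshold are hypothetical, and real surveillance models are of course far richer than a z-score test:

```python
# Illustrative anomaly screen: flag observations far from the mean so that
# human analysts, not the machine, decide what the anomaly actually means.
import statistics


def flag_outliers(values, threshold=2.0):
    """Return indices of observations more than `threshold` standard
    deviations from the sample mean."""
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return [
        i for i, v in enumerate(values)
        if stdev > 0 and abs(v - mean) / stdev > threshold
    ]


# Hypothetical daily trading volumes with one suspicious spike at the end.
daily_volumes = [100, 104, 98, 101, 99, 103, 97, 500]
print(flag_outliers(daily_volumes))  # → [7]
```

The design point is the hand-off: the machine narrows an ocean of data down to a short list, and expert staff apply judgment to what remains.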

We can also envision the day where rulebooks are digitized, compliance is increasingly automated or built into business operations through smart contracts, and regulatory reporting is satisfied through real-time DLT networks.  The machines here at the CFTC would have the ability to communicate regulatory requirements and consume and analyze the data that comes in through such systems.

This last point is one worth pondering for a few minutes further.  The ability to digitize rule-sets and consume, process, and analyze data in real-time could very well be the capability that allows us to explore application of so-called “agile regulation.”  Rather than rely on static rules and regulations that were put in place without knowing exactly the consequences or results they would drive in the market, we may be able to actually measure data, real-world outcomes, and success in satisfying regulatory objectives.[25]

An example may be helpful here.  Imagine you’ve got a speed limit sign – that’s a static speed limit.  The two regulatory objectives you’re trying to solve for with a speed limit are safety and the efficient flow of traffic.  If you actually had a dynamic speed limit that measured road and weather conditions (imagine a digital display), you might be able to lower the speed limit if it’s raining in order to better satisfy the safety objective, or increase the speed limit on a sunny weekday afternoon when traffic is light to achieve a safe, but more efficient, flow of traffic.[26]

This is a good example of thinking about agile forms of regulation, and regulation that is tied to satisfying real-world objectives.  As we move forward in transforming our capabilities, we should also think about transforming how we regulate.  As machines assume more economic tasks and functions, we would expect that these machines can be programmed with rules that ensure compliance with laws and regulations.  Indeed, the field of “RegTech,” which holds promise in enhancing compliance capabilities, and at lower cost, will be intertwined with advances in machine learning.  The concept of machine executable regulatory rulebooks will increasingly become a reality.

The Realization of CFTC 2.0

The launch of LabCFTC in May 2017 was in recognition that market regulation needed to keep pace with today’s digital transformation.[27]  Financial markets have always been amongst the quickest to adopt emerging technologies and continue to do so.  Falling behind in terms of basic understanding or adoption of technological advancements would undermine the agency’s effectiveness in overseeing the safety and soundness of contemporary markets.

To address this concern, we organized a series of our efforts under a LabCFTC work stream called “CFTC 2.0.”  Beyond LabCFTC’s broader engagement focus, the goal of CFTC 2.0 is to understand, test, facilitate, and in some cases incorporate emerging technologies that can improve the efficiency and effectiveness of our markets or our core activities as a regulator.  These efforts can serve to help inform our internal technology strategies as well as broader policy considerations.

To this end, we have been able to identify new surveillance tools through our outreach efforts, and are exploring ways to stimulate innovative activity through planned competitions.[28]  We are also assessing new RegTech tools and how smart contracts can code compliance, as well as considering the role of the regulator in advancing robo-rulebook efforts.  We continue to learn how cloud technologies and interoperable database systems are driving the next evolution in computing infrastructure.

Ultimately, our learnings over the past year have led us to a simple but profound conclusion:  the CFTC has no choice but to adopt effective, up-to-date big data analysis capability.  Although our present capacity has adequately served the agency through the past few years,[29] it struggles to keep pace with our growing data needs.  Commercial trade execution and strategy in CFTC-regulated markets are increasingly driven by quantitative data analysis of highly granular market data.  We must increase our own big data analysis capability to increase market intelligence, optimize market surveillance and oversight, and formulate smart policy prescriptions.  The right path, indeed the only path, is to combine robust data collection, automated data analysis and state-of-the-art artificial intelligence capability to transform the CFTC into a highly effective, big data math shop – what I call a “Quantitative Regulator.”

A Roadmap for Modernization

The starting point for any modernization effort is the recognition of the enormous and comprehensive body of high quality trading data currently received by the CFTC.  That preeminent body of data needs to be processed through an upgraded, state-of-the-art data collection and automated analysis engine capable of handling both structured and unstructured data with enhanced AI capability.  This engine needs to be overseen by a highly competent, in-house data architect and trained support team.  These efforts will enhance our existing activities in the AI area.

We also must provide greater prominence within our agency for quantitative data collection and analysis.  We should decouple it from the management of computer and communications hardware and systems.  To this end, it is my belief and vision that the CFTC should establish a new office of data and analytics as a stand-alone department ready and capable of serving the needs of the Commission and the operating divisions.  The new office would be headed by a Chief Data Officer with strong data science qualifications.  And, of course, establishing such “QuantReg” capabilities would be based on thoughtful and prudent technology and procurement strategies that ensure we satisfy the end-goal of more effective and efficient regulation.

I look forward to exploring these concepts further and working on a bipartisan basis with my fellow Commissioners, Members of Congress, the current Administration, outside thought-leaders, and leading technology and trading firms.  I am confident that there are many leaders and stakeholders who share my conviction that America must lead when it comes to quantitative data collection and analysis in regulating contemporary digital markets.

Quantitative Regulators Must be Responsible

Before concluding, I would like to come back to a thread I have raised throughout my remarks – and that is with respect to the need to be careful, thoughtful, and responsible in identifying and addressing inevitable challenges that will arise with the rise of machines.

With quantitative regulatory capabilities comes great responsibility.  For example, market participants are rightly concerned about the handling of confidential data.  That is why it is appropriate for the CFTC generally to undertake to gather no more data than it is prepared to analyze, and then process and analyze all of the data that it does in fact collect.  The centrality of data does not justify boundless regulatory fishing expeditions, and we must be vigilant against the risk of overreach.  Additionally, the CFTC must handle all data in a confidential manner with the highest level of security and protection, and the CFTC should be candid and transparent in its data collection practices and uses.  Finally, the CFTC should, to the greatest extent possible, seek to make publicly available anonymized data and value-added data analysis that can be utilized broadly by market participants.[30]


American derivatives markets are the world’s largest, most developed and most influential.  They are also among the world’s best regulated.  The CFTC has overseen the U.S. exchange-traded derivatives markets for over 40 years.  The agency is recognized for its principles-based regulatory framework and econometrically-driven analysis.  The CFTC is recognized around the world for its depth of expertise and breadth of capability.

This combination of regulatory expertise and competency is one of the reasons why U.S. derivatives markets continue to serve the needs of participants around the globe to hedge price and supply risk safely and efficiently.  It is why well-regulated U.S. derivatives markets continue to serve a vital national interest – safe and efficient U.S. Dollar based commodity and financial risk transfer.

Yet, modern markets are rapidly going through a fundamental data and technological transformation.  The amount of market data continues to grow exponentially and to become ever more granular, quantitative data analysis increasingly drives commercial trade execution and strategy, and limited agency funding requires increasing operational efficiency.

We have seen the computational revolution that has taken place in the past seven decades since the birth of the Manchester Baby.  We have seen the digital revolution that has taken place in the last of those seven decades.

In the next decade, the CFTC and, indeed, all market regulators, have no choice but to transform alongside modern digital markets and become quant-driven agencies, combining robust data collection, automated data analysis, and state-of-the-art artificial intelligence.  We have no choice but to become highly effective, “Quantitative Regulators.”

The world’s preeminent derivatives markets need the world’s most advanced regulatory and technological competency.  The time has come for the CFTC to match its unparalleled market intelligence capability with unparalleled quantitative data analytical capability.  The CFTC is ready to lead the world in QuantReg.

I look forward to hearing from all of you – leaders in law, our markets and in technology – as we move forward with our transformation.  It’s an exciting world we live in; one filled with new ideas, innovations, and opportunities.  And we at the CFTC look forward to confidently and proactively stepping into the future.

Thank you.

[1] Klaus Schwab, “The Fourth Industrial Revolution: What It Means; How to Respond,” World Economic Forum, Jan. 14, 2016.

[2] The Technology Advisory Committee, under the sponsorship of Commissioner Brian Quintenz, has been particularly active, having already formed four subcommittees examining critical and timely topics in detail.  One subcommittee, focused on the modern trading environment, is evaluating the true risks of algorithmic and automated trading, private sector incentives and responses to controlling operational risk, and any gaps therein where regulatory solutions are necessary.  Other subcommittees are addressing questions surrounding virtual currency including suggesting self-regulatory policies for trading platforms, Distributed Ledger Technology and any associated regulatory applications, and internal and external cybersecurity practices and protocols.

[3] The “Market Intelligence Branch” was created in the Division of Market Oversight to understand, analyze and communicate current and emerging derivatives market dynamics, developments and trends – such as the impact of new technologies, asset classes and trading methodologies – to increase the agency’s knowledge of evolving market structures and practices and promote efficient and sound markets.

[4] LabCFTC engages directly with emerging technologies, including DLT and Blockchain, machine learning and artificial intelligence, and cloud.  These new technologies underpin crypto assets, smart contracts, algorithmic trading, as well as new compliance and supervisory techniques, all of which have been – and will continue to be – key focus areas for the CFTC.  Visit us at

[5] With the “Internet of Things,” smart sensors are able to track and report an incredible range of information, potentially including meteorological conditions or the provenance of agricultural commodities; DLT can enable real-time trade data aggregation and relay such information in a standardized form among a broad range of market participants; and cloud technologies can enable data storage capacity unimaginable just a decade ago.

[6] Indeed, the forced standardization of data formats and fields and collective use of the system by multiple actors may prove to be some of the most compelling aspects of DLT.  This dynamic should result in more usable and deeper data sets that can be fed to machines – and this notion is not lost on us as we think about ways to improve data reporting in our space.  In many ways, DLT and blockchain-inspired database systems may help move us to a 2.0 version of back-office computing infrastructure that paves the way for advances in automation and machine learning.

[7] Clint Boulton, “What is RPA? A Revolution in Business Process Automation,” CIO, Sept. 3, 2018.

[8] Data automation technologies are in fact already being broadly adopted and successfully integrated.  For instance, a large commercial bank had a positive experience with RPA. The bank restructured its claim process and utilized RPA software to manage 1.5 million claim requests per year.  This resulted in an added capacity equivalent to over 200 full-time employees but with only approximately 30 percent of the hiring cost for additional employees.  The bank also recorded an approximate “27 percent increase in tasks performed ‘right first time’.” David Schatsky, et al., “Robotic Process Automation: A path to the Cognitive Enterprise,” Deloitte Insights, Sept. 14, 2016.

[9] The idea that machines will be able to match – and then surpass – human intelligence has captured our collective mindshare for decades.  Of note is the 1968 film, 2001: A Space Odyssey, featuring HAL, the super machine that slowly causes chaos through a series of malfunctions and whose name stands for “Heuristically programmed algorithmic computer.”

[10] In his book “Superintelligence,” Professor Nick Bostrom notes that roughly every decade in recent history there has been a period of promise for artificial intelligence followed by disillusionment, despair, and then disregard. Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (2014).

[11] Randy Bean, “How Big Data Is Empowering AI and Machine Learning at Scale,” MIT Sloan, May 8, 2017.

[12] Bloomberg Professional Services Blog: “The Race to AI Utilization in Finance is a Marathon, Not a Sprint,” Oct. 4, 2017.

[13] Id.

[14] Id.

[15] Jayesh Punater, Blog Post: “Big Data, Machine Learning, AI…Blah Blah Blah!” Managed Funds Association, Jan. 12, 2017.

[16] FINRA Podcast, How the Cloud and Machine Learning Have Transformed FINRA Market Surveillance, July 17, 2018.

[17] A recent McKinsey paper on the subject offers a useful definition stating that “[m]achine learning is based on algorithms that can learn from data without relying on rules-based programming.” Dorian Pyle & Cristina San José, “An Executive’s Guide to Machine Learning,” McKinsey Quarterly, June 2015.

[18] This approach is what has allowed a Stanford computer to be able to independently identify an image as being a cat when presented with such an image, and is typically what people mean when referring to concepts like “deep learning.” Id.

[19] David Schatsky, et al., “Robotic Process Automation: A path to the Cognitive Enterprise,” (see footnote 8).  For example, a company utilized RPA to automate refunds to customers when their trains were delayed.  When refund requests were received, the machine learning component analyzed the customer’s complaint; it read the text and understood language, including the “meaning and sentiment,” then categorized the information to allow the RPA tool to quickly process the information and issue a refund.

[20] Dorian Pyle & Cristina San José, “An Executive’s Guide to Machine Learning,” (see footnote 17).

[21] There are real considerations and concerns we should have – for example, workforce replacement by machines; those concerns, however, are beyond the scope of this speech. See Yuval N. Harari, “Why Technology Favors Tyranny,” The Atlantic, Oct. 2018.  See also footnote 19.

[22] There undoubtedly will be novel challenges that AI presents and that will require careful thought, principles, and human ethics to navigate.  For example, how do we ensure that machines do not embed bias into their reasoning or lack the ability to explain to human operators the basis for their decisions?  And how should we handle the potential development of powerful centralized computing systems that provide certain individuals or groups with access to data and analysis that many would argue violate our privacy norms?  These are just a few of the questions that we will need to address as a society.

[23] “Fundamentally, humans with certain skills will be valued less for what they are doing now while humans combined with technology have the capacity to become more productive, resourceful and capable than we have ever been.”  Jayesh Punater, “Data vs. Relationships – Who Wins?” Managed Funds Association, June 21, 2017.

[24] As used in markets today, quantitative data analytics and AI are typically used to flag potential issues, solutions or problems to humans rather than replacing experienced staff. John O’Hara, “AI and the Value of Human Judgement,” Tabb Forum, Mar. 20, 2017.

[25] Chris Brummer & Daniel Gorfine, “Fintech: Building a 21st-Century Regulator’s Toolkit,” Milken Institute, Oct. 21, 2014.

[26] See Tim O’Reilly, Open Data and Algorithmic Regulation, Beyond Transparency (last visited Nov. 6, 2018); Aaron Stanley, “LabCFTC Director Daniel Gorfine Talks Inaugural Year, U.S. Fintech Regulation,” Forbes, Aug. 22, 2018.

[27] Written Testimony of Chairman J. Christopher Giancarlo before the Senate Banking Committee, Washington, D.C., Commodity Futures Trading Commission (Feb. 6, 2018); see also Testimony of Chairman J. Christopher Giancarlo before the Senate Committee On Appropriations Subcommittee on Financial Services and General Government, Washington, D.C. Commodity Futures Trading Commission (June 5, 2018).

[28] CFTC Asks Innovators for Competition Ideas to Advance Fintech Solutions, Commodity Futures Trading Commission (Apr. 24, 2018).

[29] In large measure through the admirable efforts of dedicated agency staff.

[30] Perhaps this may be an area worthy of exploration through innovation competitions.