In remarks on Monday to the National Press Club, SEC Chair Gary Gensler, after first displaying his math chops—can you decipher “the math is nonlinear and hyper-dimensional, from thousands to potentially billions of parameters”?—discussed the potential benefits and challenges of AI, which he characterized as “the most transformative technology of our time,” in the context of the securities markets. When Gensler taught at MIT, he and a co-author wrote a paper on some of these very issues, “Deep Learning and Financial Stability,” so it’s a topic on which he has his own deep learning. The potential for benefits is tremendous, he observed, with greater opportunities for efficiencies across the economy, greater financial inclusion and enhanced user experience. The challenges introduced are also numerous—and quite serious—with greater opportunity for bias, conflicts of interest, fraud and platform dominance undermining competition. Then there’s the prospective risk to financial stability itself—another 2008 financial crisis, perhaps? But not to worry, Gensler assured us: the SEC is on the case.
Gensler looked at the anticipated impact of AI from both a narrow and a broad perspective. The growing capability of AI to tailor communications to each of us individually—which Gensler referred to as “narrowcasting”—raises or exacerbates a number of potential issues. The growing capacity of AI to make predictions about individuals, with outcomes that may be “inherently challenging to interpret,” could be problematic: the results of predictive algorithms could be based on incorrect information, or “on data reflecting historical biases” or “latent features that may inadvertently be proxies for protected characteristics.” Or, the AI system could create conflicts of interest by optimizing for the interests of the platform (say, the broker or financial adviser) over the interests of the customer. The SEC’s most recent agenda indicates that the Division of Trading and Markets is targeting October 2023 for a proposal “related to broker-dealer conflicts in the use of predictive data analytics, artificial intelligence, machine learning, and similar technologies in connection with certain investor interactions.” [Update: Conflicts of Interest Associated with the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers is on the SEC meeting agenda for July 26.]
And then, there are the obvious opportunities for fraud and deception, with “bad actors” using “AI to influence elections, the capital markets, or spook the public, potentially making Orson Welles’ 1938 ‘The War of the Worlds’ radio broadcast look tame.” Nevertheless, Gensler reaffirmed, “under the securities laws, fraud is fraud. The SEC is focused on identifying and prosecuting any form of fraud that might threaten investors, capital formation, or the markets more broadly.” He also made the point that, for public companies that are disclosing AI opportunities and risks, it is important “to take care to ensure that material disclosures are accurate and don’t deceive investors.”
The macro-level risks are perhaps even more daunting. There will certainly be changes to the job market, and AI may well be turf on which the U.S. and China “compete economically and technologically.” More chilling are the nightmares of bad actors’ misuse of AI: “geopolitical challenges from state actors and militaries’ potential use of AI.” Or worse.
Here, Gensler focused on two prominent macro issues: First, the expectation that a small number of AI platforms will dominate the field through economies of scale and data networks, and second, the possibility of financial instability resulting from “herding” behavior. With regard to market domination, Gensler observed that, when downstream companies use a base AI model, the base model effectively “train[s] off of downstream applications”: the applications provide the models with more data, leading to a cycle of model improvement and greater competitive advantage. The ultimate result, Gensler suggested, could be one or a very small number of AI platforms, “garner[ing] greater economic rents” from their dominant positions. “For the SEC,” Gensler observed, “the challenge here is to promote competitive, efficient markets in the face of what could be dominant base layers at the center of the capital markets.”
With regard to financial instability, Gensler suggested that the use of AI might “heighten financial fragility as it could promote herding with individual actors making similar decisions because they are getting the same signal from a base model or data aggregator. This could encourage monocultures. It also could exacerbate the inherent network interconnectedness of the global financial system.” That is, a dominant base model could provide inaccurate information to market participants, leading them to use the same erroneous data to make the same wrong decisions and judgments, potentially roiling the markets. Or worse: the result might be a “future financial crisis.” Accordingly, Gensler maintained, risk management guidance will need to be updated to address these macro challenges: “Model risk management tools, while lowering overall risk, primarily address firm-level, or so-called micro-prudential, risks. Many of the challenges to financial stability that AI may pose in the future, though, will require new thinking on system-wide or macro-prudential policy interventions.”
Addressing the risks of AI could ultimately demand changes to the securities laws; however, Gensler assured us, the SEC is “focused on protecting against both the micro and macro challenges that I’ve discussed.”