Yesterday, in remarks at Yale Law School, SEC Chair Gary Gensler talked about the opportunities and challenges of AI. According to Gensler, while AI “opens up tremendous opportunities for humanity,” it “also raises a host of issues that aren’t new but are accentuated by it. First, AI models’ decisions and outcomes are often unexplainable. Second, AI also may make biased decisions because the outcomes of its algorithms may be based on data reflecting historical biases. Third, the ability of these predictive models to predict doesn’t mean they are always accurate. If you’ve used it to draft a paper or find citations, beware, because it can hallucinate.” In his remarks, Gensler also addressed the potential for systemic risk and fraud. But, in the end, he struck a more positive note, concluding that the role of the SEC involves both “allowing for issuers and investors to benefit from the great potential of AI while also ensuring that we guard against the inherent risks.”
In terms of macro issues, one concern he raised was about AI and systemic risk. That risk might arise if just a few platforms ultimately dominate the field, resulting in only “a handful of base models upstream.” That, he said, “would promote both herding and network interconnectedness. Individual actors may make similar decisions as they get a similar signal from a base model or rely on a data aggregator. Such network interconnectedness and monocultures are the classic problems that lead to systemic risk.” Current guidance for risk management will need to be reimagined; the “challenges to financial stability that AI may pose in the future will require new thinking on system-wide or macro-prudential policy interventions.”
On a more micro level, Gensler addressed fraud and AI washing. Citing a paper by former SEC Commissioner Kara Stein, he observed that AI can result in programmable harm, predictable harm and unpredictable harm. Programmable harm involves intent; accordingly, potential liability is fairly straightforward. Predictable harm involves a “reckless or knowing disregard of the foreseeable risks of your actions” in deploying a particular AI model. Did the actor act reasonably to prevent its AI model from engaging in illegal actions, such as front-running, spoofing or giving conflicted investment advice? Were there appropriate guardrails in place? Were the guardrails tested and monitored? Did the guardrails take into account the possibility that the AI model may be “learning and changing on its own,” or may “hallucinate” or “strategically deceive users”? With regard to potential liability for truly unpredictable harm, he said, that will play out in the courts. Quoting the first SEC Chair, Joseph Kennedy, he said: “The Commission will make war without quarter on any who sell securities by fraud or misrepresentation.”
Whenever there is a buzzy new technology, there are also often false claims about its use. But if “a company is raising money from the public,” Gensler cautioned, the company “needs to be truthful about its use of AI and associated risk.” He noted that “[s]ome in Congress have proposed imposing strict liability on the use of AI models.” He continued:
“As AI disclosures by SEC registrants increase, the basics of good securities lawyering still apply. Claims about prospects should have a reasonable basis, and investors should be told that basis. When disclosing material risks about AI—and a company may face multiple risks, including operational, legal, and competitive—investors benefit from disclosures particularized to the company, not from boilerplate language. Companies should ask themselves some basic questions, such as: ‘If we are discussing AI in earnings calls or having extensive discussions with the board, is it potentially material?’ These disclosure considerations may require companies to define for investors what they mean when referring to AI. For instance, how and where is it being used in the company? Is it being developed by the issuer or supplied by others? Investment advisers or broker-dealers also should not mislead the public by saying they are using an AI model when they are not, nor say they are using an AI model in a particular way but not do so. Such AI washing, whether it’s by companies raising money or financial intermediaries, such as investment advisers and broker-dealers, may violate the securities laws.”
Gensler also discussed the potential for AI to “hallucinate,” as well as the potential for it to build in biases and conflicts of interest.
The moderator also opened up the discussion for questions from the audience. Of course, there were a couple of questions about the SEC’s climate disclosure proposal. One audience member asked about the SEC’s role with regard to climate in light of the rules enacted by the EU, California and other jurisdictions. Another audience member observed that, in one reality, the SEC certainly has a role in crafting climate disclosure requirements under the rule of law, but asked whether the staff takes into account the “other reality” that its proposed climate disclosure rules—and ESG rules in general—have become a flashpoint in the culture wars. Does the staff think about the “noise”? Gensler responded that the SEC is certainly not a climate regulator. But, given that, by 2022, a huge proportion of companies already provided some climate disclosure, including many that provided GHG data, the SEC has a role in ensuring consistency and comparability. Gensler observed that, of the 34 rules the SEC has adopted so far, six have been challenged in court and 28 have not. He viewed these legal challenges as very important aspects of democracy. Sustainable rulemaking was also important, he said, and a bad loss could be harmful. In his view, the SEC was taking appropriate actions within the law, but as courts shift their interpretations, it could become more of a challenge. Where will the courts be in 2025 or 2026? They were certainly looking at the different circuits. In the Fifth Circuit, for example, although that court recently vacated the SEC’s rules for disclosure regarding company stock repurchases, the court did not agree that the rules violated the First Amendment. Upholding the rule against the First Amendment challenge was legally very important in Gensler’s view. (It’s worth noting here that the Chamber of Commerce has already challenged California’s climate rules on the basis of the First Amendment. See this PubCo post.)