Tag: artificial intelligence
Are boards overseeing AI?
Is there a hotter topic in the business world than AI? AI offers major opportunities for progress and productivity gains, but substantial risks as well. According to FactSet, 179 companies in the S&P 500 used the term “AI” during their earnings call for the fourth quarter of 2023, well above the 5-year average of 73. Among these companies, “the average number of times ‘AI’ was mentioned on their earnings calls was 13, while the median number of times ‘AI’ was mentioned on their earnings calls was 5. The term ‘AI’ was mentioned more than 50 times on the earnings calls of nine S&P 500 companies.” Similarly, Bloomberg reports that “[a]t least 203, or 41%, of the S&P 500 companies mentioned AI in their most recent 10-K report, Bloomberg Law’s review found. That’s up from 35% in 2022 and 28% in 2021. A majority of the disclosures focused on the risks of the technology, while others focused on its benefit to their business.” One of the many challenges that AI presents is on the corporate governance front, in particular board oversight, a topic addressed in this recent paper from ISS, AI Governance Appears on Corporate Radar. For the paper, ISS examined discussions of board oversight and director AI skills in proxy statements filed by S&P 500 companies from September 2022 through September 2023 to “assess how boards may evolve to manage and oversee this new area of potential risks and opportunities.”
What happened at the Corp Fin Workshop of PLI’s SEC Speaks 2024?
At the Corp Fin Workshop last week, a segment of PLI’s SEC Speaks 2024, the panel focused on disclosure review, a task that occupies 70% of Corp Fin’s attorneys and accountants. The panel discussed several key topics, looking back to 2023 and forward to 2024. Some of the presentations are discussed below.
Gensler talks about AI (and a bit about climate)
Yesterday, in remarks at Yale Law School, SEC Chair Gary Gensler talked about the opportunities and challenges of AI. According to Gensler, while AI “opens up tremendous opportunities for humanity,” it “also raises a host of issues that aren’t new but are accentuated by it. First, AI models’ decisions and outcomes are often unexplainable. Second, AI also may make biased decisions because the outcomes of its algorithms may be based on data reflecting historical biases. Third, the ability of these predictive models to predict doesn’t mean they are always accurate. If you’ve used it to draft a paper or find citations, beware, because it can hallucinate.” In his remarks, Gensler also addressed the potential for systemic risk and fraud. But, in the end, he struck a more positive note, concluding that the role of the SEC involves both “allowing for issuers and investors to benefit from the great potential of AI while also ensuring that we guard against the inherent risks.”
Could AI trigger a financial crisis?
In remarks on Monday to the National Press Club, SEC Chair Gary Gensler, after first displaying his math chops—can you decipher “the math is nonlinear and hyper-dimensional, from thousands to potentially billions of parameters”?—discussed the potential benefits and challenges of AI, which he characterized as “the most transformative technology of our time,” in the context of the securities markets. When Gensler taught at MIT, he and a co-author wrote a paper on some of these very issues, “Deep Learning and Financial Stability,” so it’s a topic on which he has his own deep learning. The potential for benefits is tremendous, he observed, with greater opportunities for efficiencies across the economy, greater financial inclusion and enhanced user experience. The challenges introduced are also numerous—and quite serious—with greater opportunity for bias, conflicts of interest, fraud and platform dominance undermining competition. Then there’s the prospective risk to financial stability altogether—another 2008 financial crisis perhaps? But not to worry—Gensler assured us, the SEC is on the case.