In this new paper from the Rock Center for Corporate Governance at Stanford, The Artificially Intelligent Boardroom, the authors discuss the potential impact of artificial intelligence on boardroom practices—impact that they believe will be significant, “but perhaps in different ways than is commonly recognized.” While managements and boards have been practically transfixed by the prospective application of AI across the operations of companies, the authors point out that much less attention has been paid to how AI might be applied to “reshape the operations and practices of the board itself, with the prospect of substantially improving corporate governance quality.” The authors expect AI to affect board function, board processing of information, contributions of board advisors and, especially, board/management interaction. The adoption of AI, they advise, “will also raise important questions about how to maintain the line between board and managerial responsibilities, and how expectations on each side will change.” Will AI fuel expectations for board performance? Will it increase the burden on directors to meet those expectations?

As described by the authors, in the traditional view of corporate governance, managements and boards each have separate responsibilities: management’s responsibility is operational—running the company—and the board’s duty is oversight of management. Because of their different roles, the authors explain, there is frequently substantial information asymmetry between boards and managements. And, they maintain, unless there are “red flags,” the board may “rely on information provided by management to inform its decisions and carry out this oversight.”  Under current board practices, boards obtain information from management through various meetings and communications, and boards respond by asking questions and requesting additional information or, in some cases, using third-party advisors to provide additional information or outside perspectives.  In this way, and “if the board makes decisions with due deliberation and without conflict of interests,” the board “will have satisfied its fiduciary duty to shareholders.” 

But here’s the rub:  there are many examples that illustrate the “insufficiency of this arrangement. Many boards have been woefully uninformed about the financial, operating, and strategic risk of management decisions—as borne out through repeated examples of corporate meltdowns over the years. Boards have erred in situations of CEO selection, financial reporting, product liability, compensation setting, and reputation management.” The authors point to a 20-year-old study by Deloitte highlighting “a surprising disconnect between the information board members say are important drivers of corporate performance and the information and metrics boards actually receive to monitor this performance.”  The “fundamental problem,” the authors conclude, is one of “information flow between management and the board.”

AI, the authors assert, can reduce this information asymmetry and thus change the dynamic between boards and managements in several ways: increasing the “volume, type, and quality of information available to management and boards”; reducing the likelihood that boards will “be ‘in the dark’ about the operating and governance realities of their companies”; increasing the burden—and expectation—that both parties will spend more time on meeting prep to review, digest and analyze the larger quantity of information available prior to, not just during, board meetings; allowing the supplementation or even replacement of information provided by third-party advisors and consultants; and increasing the “breadth of analysis available to the board, coupling the retrospective review of mostly historical data (prevalent today) with more powerful tools for predictive and trend analysis. These tools will allow boards to be more proactive and less reactive.” But, the authors later ask, how will management “react to a governance setting where boards have more access and transparency into internal operations?” Will AI provide directors with “so much access to information and analysis that they are close to becoming managers? Given the breadth and depth of analysis available through AI, who will prevent directors from asking questions that ‘cross the line?’”

The authors identify a number of other benefits that AI will offer to boards and managements. Boards will be able to conduct real-time analyses, search for alternative or supplemental information during a board discussion (allowing for more expeditious decision-making), perform “more robust scenario planning and [make] potentially richer suggestions.” By the same token, the authors theorize, AI should also better position executives to respond to board inquiries and challenges.  The authors suggest that executives will be able to run simulations and ask an AI interface questions about their own presentations, such as identifying the greatest weaknesses and flaws in their arguments. They can even ask AI to anticipate difficult questions. (Of course, as the authors later inquire, are today’s directors even equipped to use this technology or will they require substantial training?)

As noted above, historically, the contribution of board members has been largely responsive to management and the information it provides. The most important question that AI raises for board members, the authors contend, is what impact AI will have on board contributions; that is, with substantially greater access to information and analysis, the extent of which might well eclipse the information provided by management, will “the expectations for a director’s diligence in reviewing and preparing this information…be exponentially higher, and the quality of questions, challenges, and insights…also be expected to be correspondingly higher”?  The authors note that, while commercial vendors promise acceleration of board prep time with AI, “companies that significantly increase information flow to directors report that doing so increases meeting prep time by raising expectations on the level of preparedness expected of directors.”

But, the authors ask, should boards even have unlimited access to all the information that an AI interface—which itself has full access to all data in the corporate data repository—can provide?  And would boards even want unrestricted access?  The authors caution that circumscribing the information available to directors “will require careful thinking. Boards and their counsel will have to determine how boards rely on AI analysis conducted by an individual director that was not provided by management. What impact this has on fiduciary expectations is unknown.” As the authors subsequently ask, “[h]ow can directors embrace a larger role in analysis and decision making without increasing their personal liability?” In this context, the authors raise the issue of a director conducting his or her own “research” using AI who “discover[s] activities or decisions that raise new questions. Failure to make inquiry might introduce legal complexities.” One challenge the authors identify is that AI will probably “generate a high number of red flags related to internal and external practices or threats,” which will require boards to assess “materiality risk in determining which risks require additional investigation, how to prioritize them, and how not to create a paper trail that increases the board’s own liability.” 

The authors also looked at the impact of AI in specific governance areas.  For example, with regard to strategy (e.g., scenario planning, testing assumptions, identifying risk, and prioritizing investment), the authors suggest that some work that otherwise might have been outsourced to consultants may now be performed in-house and potentially compared to the recommendations of outside consultants. In determining compensation, the compensation committee will be able to use analytical and benchmarking tools to evaluate compensation design and “analyze sensitivity of pay to peer groups selection in real-time, predict proxy advisor recommendations, and consider tax and legal implications.” The authors identify similar advantages that AI analytics will offer to boards on human capital management, such as applying pattern recognition to workforce data; audit, such as AI surveillance tools that “look for internal control weakness and identify potential fraud”; legal, such as monitoring emerging legal and regulatory developments and legal actions; and board evaluations, such as AI tools that can analyze board effectiveness, coach and advise, measure engagement and time allocation, and assess whether the directors are “primarily reactive or proactive.” The paper describes a host of other benefits that AI may offer to boards on specific governance topics.

While AI offers many benefits, the authors also point to a number of potential risks and challenges. First, the authors observe that public companies that compete largely against private companies will need to think carefully “about the information they feed into models and how to perform benchmarking analysis using public, audited data versus privately sourced data that may carry inaccuracies or biases.” Another significant risk they identify is that current AI models generate a substantial number of errors, whether as a result of inherent biases, the quality of input, computational errors, competitive intelligence or just AI “hallucination.”  The authors observe that AI “does not always say ‘I don’t know’ to questions it might not know an answer to, grabbing available data to answer a question when the data might not be directly applicable. Boards and managers will need to learn how to fact check output before relying on it. This will require deeper (human) familiarity with the data. Boards will need to [be] educated on these and other limitations of this technology.” In addition, the authors raise concerns about protection of this sensitive data from cybersecurity and hacking threats and other unauthorized access.

Finally, the authors advise that “board members will have to train themselves not to fall victim to excessive analysis (‘analysis paralysis’), but keep their effort focused on practical and efficient outcomes that benefit the corporation and its stakeholders. To this end, board and committee chairs will need to exhibit stronger leadership skills to manage meeting dynamics effectively and ensure that analyses and conversations remain on track.”

Posted by Cydney Posner