Overseeing Artificial Intelligence: Moving Your Board from Reticence to Confidence

Ross Pounds

Corporations have discovered the power of artificial intelligence (AI) to transform what’s possible in their operations. Through algorithms that learn from their own use and constantly improve, AI enables companies to:

  • Bring greater speed and accuracy to time-consuming and error-prone tasks such as entity management
  • Process large amounts of data quickly in mission-critical operations like cybersecurity
  • Increase visibility and enhance decision-making in areas from ESG to risk management and beyond

But with great promise comes great responsibility — and a growing imperative for monitoring and governance.

“As algorithmic decision-making becomes part of many core business functions, it creates the kind of enterprise risks to which boards need to pay attention,” writes law firm Debevoise & Plimpton.

Many boards have hesitated to take on a defined role in AI oversight, given the highly complex nature of AI technology and the specialized expertise it demands. But now is the time to overcome such hesitation. AI solutions are moving rapidly from niche to norm, and their increased adoption has heightened scrutiny by regulators, shareholders, customers, employees and the general public.

Here we outline how AI-related risks are escalating, why this puts more pressure on corporate boards and what steps directors can take right now to grow more comfortable in their AI oversight role.

Increasing AI Adoption Escalates Business Risks

Developments last year at online real estate company Zillow dramatically illustrate the bottom-line impact when AI solutions are overestimated or go awry. At the beginning of 2021, the company’s Zillow Offers division launched AI-powered capabilities to streamline the property valuation process. After its algorithms led the company to buy a host of homes at prices higher than they could be resold for, Zillow took a $304 million third-quarter inventory write-down, saw its stock plunge, shuttered Zillow Offers entirely and announced plans to cut 25% of its staff.

Across industries, AI poses challenges to the rising board priority of ESG, despite the technology’s formidable ability to automate and accelerate data collection, reporting and analysis. First, there’s its sometimes overlooked environmental impact. For an image-recognition algorithm to learn to recognize just one type of image, for example, it may need to process millions of examples. All of this processing requires energy-intensive data centers.

“It’s a use of energy that we don’t really think about,” Professor Virginia Dignum of Sweden’s Umeå University told the European Commission’s Horizon magazine. “We have data farms, especially in the northern countries of Europe and in Canada, which are huge. Some of those things use as much energy as a small city.”

AI can also have a negative impact on the “S” in ESG, with examples from the retail world demonstrating AI’s potential for undermining equity efforts, perpetuating bias and causing companies to overstep on customer privacy.

At Amazon, an AI-fueled HR recruitment tool demonstrated a preference for male candidates: an algorithmic model trained on a decade’s worth of submitted resumes, mostly from men, downgraded candidates who had graduated from women’s colleges. In the area of data privacy, algorithms and data analytics may collect and reveal more than intended, with disastrous consequences. This was the case when Target’s “pregnancy prediction score,” designed to anticipate the shopping habits of expecting customers, inadvertently revealed a teenage girl’s pregnancy to her family.

Regulators Expand Their Scrutiny to Corporate Boards

Regulators worldwide have been watching AI’s unintended consequences and responding. In April 2021, the European Commission published its draft legislation governing the use of AI; if passed, it would place strict requirements on companies using AI systems, scaled to each system’s potential risk to the user. Systems deemed an “unacceptable risk” could be banned outright. A New York City law taking effect in 2023 regulates the use of “automated employment decision tools” to screen candidates, since such tools could introduce AI-generated bias based on race, ethnicity or sex.

AI regulation has been intersecting with a legal doctrine from an entirely different industry: the Caremark oversight duty, invoked when leadership at Blue Bell Creameries faced claims of breaching their fiduciary duties by knowingly disregarding risks — in this case, a deadly 2015 outbreak of listeria — and failing to oversee the safety of the company’s operations.

More and more, AI problems are becoming board problems, and AI oversight a board responsibility. Around the world, regulatory authorities are codifying the spirit of Caremark into law. Examples include:
  • Principles by the Hong Kong Monetary Authority holding the board and senior management accountable for AI-driven decisions, with leadership charged to ensure appropriate AI governance, oversight, accountability frameworks and risk mitigation controls
  • A suggestion by the Monetary Authority of Singapore that firms set approvals for highly material AI decisions at the board/CEO level, with the board maintaining a central view of these decisions and receiving periodic updates of company AI use
  • Emphasis by the UK Financial Conduct Authority and Bank for International Settlements that boards and senior management start tackling AI’s major issues “because that is where ultimate responsibility for AI risk will reside,” in the words of Debevoise & Plimpton

Steps Your Board Can Take to Get AI Savvy

How can corporate boards stay on top of AI developments, and ahead of AI-related risk? Guidance follows, drawing on insights from Debevoise & Plimpton and from Diligent.

  • Strengthen expertise: Evaluate your current AI knowledge base and comfort level. If you’re concerned the necessary expertise isn’t yet there, consider AI training — or adding another director. Also consider getting at least a few directors up to speed on your organization’s key AI systems: what they do, how they use data, and associated operational, regulatory, and reputational risks.
  • Designate ownership: Integrate the topic of AI and related risk management issues into the board’s agenda. Clearly designate roles and responsibilities. While ownership might initially reside with the full board, it could be wise to delegate that responsibility to a specific committee, whether an existing one such as the committee overseeing cybersecurity or a new committee dedicated solely to AI oversight.
  • Formalize policies and procedures: Establish reporting requirements and an overall compliance structure. Effective requirements might include regular risk assessments, continuous monitoring of certain systems and specific policies and procedures governing how management should respond immediately to an adverse event.
  • Prioritize AI awareness and transparency: Internally, use briefings to remain up to speed on all AI-related incidents and material investigations. Externally, cultivate transparency by including detailed accounts of oversight and compliance activities in board minutes, and share activities and risks as appropriate in materials regularly made available to shareholders.

While overseeing AI-related risks can seem intimidating and complex, boards can simplify the process and strengthen peace of mind through education, awareness and a plan for structured oversight.

The right technology can help your board move more quickly and confidently into an AI oversight role. Schedule a meeting with Diligent today to find out more.

Ross Pounds, a Senior Manager at Diligent and expert in ESG, also has deep experience in governance, risk, audit and compliance. Ross has done extensive work on how organizations can prepare for climate accounting regulations and best achieve sustainability and diversity goals.