AI and directors’ duties

COMPANIES and their senior management are increasingly utilising artificial intelligence (AI) for a range of purposes. At the board level, however, most commentators to date have focused on the need for directors to exercise caution and to be aware of the legal risks associated with using AI. This column shifts the focus by exploring the circumstances in which directors’ duty of care might incorporate an obligation to use AI. It first examines the relationship between corporate governance and AI, then outlines the current duty of care frameworks in Australia, Hong Kong and mainland China, and finally suggests three factors for determining when such an obligation might arise.

CORPORATE GOVERNANCE AND AI

Broadly speaking, corporate governance can be described as a framework of rules, relationships, systems and processes by which companies make decisions and are directed and controlled. The focus is not just on rules and relationships but also on the systems or processes by which corporate decisions are made. AI is a potential tool in the systems and processes that support corporate governance.

For companies, two areas in which AI has been adopted are risk management practices and internal audit processes and controls. For both companies and regulators, AI has been adopted in regulatory technology or “regtech”. In a previous issue (see China Business Law Journal, volume 10, issue 7: Regtech and corporate disclosure), this column discussed regulators applying natural language processing (NLP) to monitor compliance with corporate disclosure requirements. It also examined the legal and regulatory implications that might arise, including a lack of transparency in identifying how AI operates, and the appropriate level of human involvement to guarantee trust in the process.

DIRECTORS’ DUTY OF CARE

In many jurisdictions, company directors are subject to a duty of care, sometimes expressed as a duty of care and diligence. In Australia, the statutory duty of care and diligence is contained in section 180 of the Corporations Act, which provides that:

(1) A director or other officer of a corporation must exercise their powers and discharge their duties with the degree of care and diligence that a reasonable person would exercise if they:

    (a) were a director or officer of a corporation in the corporation’s circumstances; and
    (b) occupied the office held by, and had the same responsibilities within the corporation as, the director or officer.

In Hong Kong, section 465 of the Companies Ordinance provides:

Duty to exercise reasonable care, skill and diligence.

    (1) A director of a company must exercise reasonable care, skill and diligence.
    (2) Reasonable care, skill and diligence mean the care, skill and diligence that would be exercised by a reasonably diligent person with:

      (a) the general knowledge, skill and experience that may reasonably be expected of a person carrying out the functions carried out by the director in relation to the company; and
      (b) the general knowledge, skill and experience that the director has.

Along similar lines, article 147 of the Company Law of China stipulates:

The directors, supervisors and senior managers shall comply with the laws, administrative regulations, and the company’s bylaws. They shall bear the obligations of loyalty and diligence to the company.

Article 147 imposes two duties on directors and other managerial staff: diligence (generally understood to be equivalent to a duty of care) and loyalty (also translated into English as a “duty of fidelity”).

For present purposes, the question is whether a failure by directors to inform themselves about a company’s use of AI – and to ensure that the company uses AI in appropriate circumstances – might constitute a breach of their duty of care.

In considering this question, some insights might be gleaned from the US decision in Brane v Roth, decided just over 30 years ago. Here, the Court of Appeals of Indiana, First District, held that the directors of a grain co-operative had breached their duty of care by failing to inform themselves of the benefits of hedging the price of grain, and by failing to supervise the actions of the manager who was responsible for implementing the hedging arrangements. This has been described informally as a “duty to hedge”.

Although the use of derivatives for hedging purposes may not be completely comparable with the use of AI for corporate governance, the comparison highlights the extent to which the emergence of new technologies brings better ways of doing things, sometimes to the point where the new technologies represent the best or most prudent way.

It is relevant to note that the word “technology” has Greek roots: tekhne (meaning art or craft) and -logia (denoting a subject of study or interest); the combined form, tekhnologia, meant “systematic treatment”. Accordingly, hedging is as much a form of technology as AI.

RELEVANT FACTORS

If directors are subject to an obligation to use AI in appropriate circumstances, the question then arises as to what those circumstances might be. Likely relevant factors include the AI’s reliability and effectiveness, and the cost, timing and other practicalities of using AI to obtain information or to assist in corporate decision-making. On the assumption that these practical factors can be satisfied, it is likely that other factors will come into play.

Three are considered below:

    1. The company’s sector and its applicable standards and codes;
    2. The availability of AI-related expertise; and
    3. The nature of board decisions themselves.

These factors are not exhaustive but are likely to be important in determining whether an obligation to use AI might arise.

The sector in which the company operates and applicable standards and codes. Financial services firms, telecommunications companies and professional services firms are prime candidates for AI because of the volume of customer data that they hold and the need to ensure its adequate governance. Cyber risks and data breaches, in particular, are matters of increasing concern for directors and senior management, especially in professional services firms.

In addition, a relevant question is the applicability of industry standards or codes of conduct and their impact on expectations around governance. As industry-specific codes develop for issues such as cybersecurity and the potential use of AI to mitigate the associated risks, such codes are likely to inform the content and interpretation of directors’ duties in relation to the use of AI. An additional layer of codes and standards will apply if a company is listed.

The availability of expertise concerning the use of AI. A further factor is the extent to which AI expertise is available to directors and other officers. If such expertise is available at a cost and on a scale that are considered reasonable, the question becomes whether that expertise should be located at the board level, the management level, or both.

This raises issues about the extent to which boards will need to understand AI in order to satisfy their duties, and to supervise not just management but the AI systems themselves. As the use of AI expands and develops, it is likely that the performance of companies will increasingly depend on directors who have adequate knowledge of AI and its functionality.

The nature of board decisions themselves. A third factor to consider is the nature of board decisions and the potential for AI to support them. There are different ways board decisions can be categorised – for example, by reference to the areas that they cover. Four types of decisions can be identified in this regard: decisions relating to human resources; financial decisions; strategy decisions; and governance decisions. Board decisions can also be categorised by reference to the degree of certainty and complexity.

It is likely that different forms of AI technology will be effective for different types of decision-making. It would be prudent for companies confronting an AI-enabled future to maintain a register that plots AI’s potential benefits and applications, to update the register as and when technological innovation occurs, and to use it to predict likely developments. Such a register would operate along similar lines to a risk register, and would track the potential of AI by reference to factors including: the type of decision; the degree of certainty and complexity involved; relevant standards and codes; and the availability and source of AI expertise within the company.
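By way of illustration only, the following minimal sketch (in Python) shows one way an entry in such a register might be structured. The class, field names and example values are assumptions made for the purpose of illustration, not an established standard or any company’s actual practice:

    from dataclasses import dataclass, field
    from datetime import date
    from enum import Enum

    class DecisionType(Enum):
        # The four decision areas identified above
        HUMAN_RESOURCES = "human resources"
        FINANCIAL = "financial"
        STRATEGY = "strategy"
        GOVERNANCE = "governance"

    @dataclass
    class AIRegisterEntry:
        # One entry in a hypothetical AI register, tracking the factors
        # discussed in this column; all names here are illustrative.
        decision_type: DecisionType
        certainty: str                    # e.g. "low", "medium" or "high"
        complexity: str                   # e.g. "low", "medium" or "high"
        applicable_codes: list[str] = field(default_factory=list)
        expertise_source: str = ""        # e.g. board, management or external adviser
        potential_ai_applications: list[str] = field(default_factory=list)
        last_reviewed: date = field(default_factory=date.today)

    # Example entry: AI-assisted monitoring of disclosure obligations
    entry = AIRegisterEntry(
        decision_type=DecisionType.GOVERNANCE,
        certainty="medium",
        complexity="high",
        applicable_codes=["listing rules", "industry cybersecurity code"],
        expertise_source="management, with board oversight",
        potential_ai_applications=["NLP review of disclosure documents"],
    )

In practice, such a register is more likely to take the form of a governance document or spreadsheet maintained by the company secretary; the code form simply makes explicit the relevant fields and the need to keep them current.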

Although many jurisdictions are examining whether AI can make decisions in place of human directors, and act as a director in its own right, AI does not yet appear able to replace human judgement in board decisions. As a result, human judgement will continue to be an essential component of board-level decision-making.

Knowing when human judgement is necessary – and exercising it appropriately in respect of the use of AI – continues to be of critical importance. However, it is increasingly likely that, in appropriate circumstances, an obligation to use AI will arise as part of directors’ duty of care. It is therefore important for directors to anticipate and make provision for those circumstances.

This article is a shortened version of a chapter that the author has written for a forthcoming book entitled Corporate Law and Governance in the 21st Century: Essays in Honour of Professor Ian Ramsay.

Andrew Godwin

Andrew Godwin is currently a member of a World Bank team that is advising a central bank in Asia on potential reforms to its mandate. He previously practised as a foreign lawyer in Shanghai (1996-2006) before returning to his alma mater, Melbourne Law School in Australia, to teach and research law (2006-2021). Andrew is currently Principal Fellow (Honorary) at the Asian Law Centre, Melbourne Law School, and a consultant to various organisations, including Linklaters, the Australian Law Reform Commission and the World Bank.
