
Pausing AI Development

April 14, 2023

In March, more than 1,800 experts in the field of artificial intelligence (AI) signed an open letter calling for a six-month pause in AI development. The letter was prompted by the release of GPT-4, the latest model behind OpenAI’s spectacularly popular ChatGPT chatbot, which has ignited a race between rival AI labs. The letter says AI development is now out of control and that AI systems should be developed only when their effects are known to be positive. It is a rare event indeed when leading technologists publicly come together to advocate a freeze in the development of a technology.

They are not the only ones urging caution in the development of AI. In a February 2023 article in The Economist, Effy Vayena, a professor of health ethics at ETH Zurich, and Andrew Morris, director of Health Data Research UK, a scientific institute, argue that a clear regime is badly needed to regulate AI in healthcare and to address the risks and liabilities that AI generates.

Fundamentally, AI is simply computer software programmed to execute algorithms that achieve certain defined tasks, such as reaching conclusions, making informed judgements, predicting future behaviours or automating select repetitive functions. An algorithm amounts to a set of coded instructions designed to perform those specific tasks.
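
To make the point concrete, the hypothetical sketch below (in Python, using made-up figures) shows an “algorithm” in this basic sense: a short set of coded instructions that performs one defined task, here a naive prediction of future behaviour from past data. It is purely illustrative; real AI systems are vastly more complex, but the structure is the same: data goes in, defined instructions run, an output comes out.

```python
# Purely illustrative sketch: a toy "algorithm" expressed as coded instructions.
# It performs one narrowly defined task -- predicting the next value in a series
# by averaging the most recent observations.

from statistics import mean

def predict_next_value(history: list[float], window: int = 3) -> float:
    """Predict the next value as the average of the last `window` observations."""
    if not history:
        raise ValueError("At least one observation is required.")
    recent = history[-window:]
    return mean(recent)

# Hypothetical example: a hospital's monthly patient-admission counts.
admissions = [120.0, 132.0, 128.0, 141.0]
print(predict_next_value(admissions))  # -> 133.666...
```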

The call by the 1,800 experts was addressed to AI labs; failing action by the labs, they argued, governments should step in and impose a moratorium. Asked who should write the rules of AI, ChatGPT said it did not have “personal opinions, beliefs, or biases” on the matter, but went on to say that a diverse range of stakeholders should be consulted, including representatives from relevant areas such as “law, ethics and policy”, among others.

Some of the legal considerations raised by the development of AI are significant, as can be seen in the following two examples.

Limitations of Liability. A limitation of liability provision is a standard clause in most contracts: the parties agree to place an upper cap on their potential liability to each other. Importantly, the contractual relationship between the vendor and a hospital is not symmetrical. The vendor, as the ‘doer’ between the two parties, is typically more exposed and is therefore more concerned about unlimited liability. The vendor brings risks to the hospital, whose main responsibility is to pay the contractual amount owing for the services.

In most contracts, so-called direct damages – those which would be foreseeable to the contracting parties, acting reasonably, at the time of contract execution – are typically included within the liability being covered, while indirect damages – those which are not foreseeable – are typically excluded, with the result that the aggrieved party remains exposed for the difference between the two (save for any insurance it may have contracted for). Such conceptual shorthands help the contracting parties avoid having to anticipate and negotiate every risk each may be exposed to, thereby saving a significant amount of time. It is not always clear, however, which risks fall within each class, and things often become murkier where AI is involved.

For example, suppose that, through no fault of the vendor, the vendor’s AI data analytics system inadvertently discloses confidential and/or personal information belonging to the vendor’s other customers, and the hospital is met with a third-party data breach claim. Is this a direct damage or an indirect damage? It is often unclear. With any AI system, the negotiations over limitations of liability require extra attention, and usually some creativity, to reach a fair and balanced outcome for the contracting parties.

Data. By definition, AI systems depend on large quantities of data: the more data available to an AI system, the more sophisticated, accurate and reliable its outcomes are likely to be. As such, with many AI systems, the vendor will routinely seek to accumulate and aggregate its other customers’ data for the benefit of follow-on customers, thus increasing the value to the latter. How, then, will the contracting parties negotiate data ownership and use terms?

Aside from obtaining prior customer consents for the accumulation and aggregation of data, protection of confidentiality looms large with AI, particularly in the healthcare sector where patient information may be involved. Here, privacy and cybersecurity become important considerations for both parties, but especially for the hospital.

In their appeal for greater regulation of AI in healthcare and the legal liabilities that AI generates, Vayena and Morris advocate the passage of a dynamic legislative framework that can keep pace with ongoing technological developments. In their view, this should be supported by adaptable governance structures to clarify the legal responsibilities of businesses in the early stages of development of AI systems.

The authors see an urgent need to coordinate expertise internationally to fill what they call the ‘governance vacuum’. Inspired by the high level of global coordination during the pandemic (at least within some regions), the authors imagine building on existing global architectures in the health sector. They also consider that new business and investment models are needed between private-sector companies and hospitals. This, they say, will require a high level of transparency and public accountability. It will also require a great deal of flexibility on the part of regulators, who will need to be reassured that patients’ interests are fully protected and that relevant legal rights and responsibilities have been suitably addressed.