Cybersecurity, Data Protection, and Artificial Intelligence - Chapter 20 - College of Commercial Arbitrators Guide to Best Practices in Commercial Arbitration - Fifth Edition
I. INTRODUCTION
This chapter offers guidance for managing risks inherent in the largely digital practice environments in which commercial arbitration takes place today. These risks include the exposure of digital systems, devices, and data to unauthorized third parties, including cybercriminals (cybersecurity threats); the unauthorized processing or disclosure of personally identifying information or “PII” (data protection concerns); and risks arising from new technology tools, including the rapidly proliferating use of generative artificial intelligence (“AI”) in commercial arbitration. Effective management of these risks requires arbitrators to be attuned to the risks that can arise from their own day-to-day use of technology (“baseline” risk management) as well as from the arbitration matters in which they serve (“case-specific” risk management).
Despite the challenges posed by constant and rapid technological changes to this risk landscape, arbitrators can take comfort in the key takeaways of this chapter: (1) familiar principles such as competence, confidentiality, integrity, and party autonomy drive both basic practice management and arbitration case procedures; (2) prevailing ethical, institutional, and regulatory frameworks require that arbitrators take reasonable steps to mitigate risks, not that they guarantee zero risk; (3) reasonable baseline cybersecurity and data protection practices include readily accessible measures that can be adopted regardless of practice setting or infrastructure and without undue burden or expense; and (4) well-developed resources (some of which were first developed in international arbitration but should prove equally helpful for improving security in domestic arbitration) equip arbitrators with a principled approach and helpful checklists for assessing and mitigating cybersecurity and data protection risks, including many of the risks that arise in connection with new and innovative technology like generative AI.
II. THE MEANING OF CYBERSECURITY AND RELATED CONCEPTS
The term “cybersecurity” refers to the measures that protect digital networks, systems, devices, and data used or generated in the arbitration process from unauthorized access, disclosure, or disruption, whether due to inadvertent human error or malicious attack.
“Information security” is a term often used interchangeably with cybersecurity, but it more broadly encompasses physical and environmental security and non-electronic information, as well as data breach notification procedures and other incident response measures. Today, many, if not most, arbitration documents, including pre-hearing information exchange, submissions to arbitral tribunals, exhibits, hearing transcripts, and correspondence with administering institutions, are transmitted and stored primarily or exclusively in electronic form. It remains equally important, however, to safeguard non-electronic aspects of the arbitration process, including the privacy of hearing and break-out rooms and the security of hard-copy materials such as courier packages and an arbitrator’s notes and study materials.
“Data protection” refers to how information that could be used to identify an individual is handled. Data protection laws and regulations vary across jurisdictions, including on the definition and scope of the information and activities they regulate. These requirements generally are mandatory and may trigger civil and/or criminal liability for those who control or process that information. The “control” or “processing” of data typically is defined very broadly in applicable laws and regulations and encompasses all arbitral participants, including arbitrators.
As detailed further below, regulated information is usually referred to as “personally identifying information” (“PII”) in the United States, and may include sensitive and non-sensitive data, as well as data that can be used to identify an individual directly or indirectly. There is no single legal definition of PII, however, because it is described and classified differently across a patchwork of industry- and sector-based federal and state laws. Another common term for this type of information is “personal data,” which has an established meaning under the European Union’s General Data Protection Regulation (“GDPR”) and encompasses a broader range of data and associated “processing” activities than US laws and regulations.