BY A.S. VAMSI KRISHNA AND SWAGAT AHUJA, THIRD-YEAR STUDENTS AT RGNUL, PUNJAB
Introduction
Recently, there has been a surge of interest in the notion of artificial intelligence (‘AI’) based platforms taking over the role of an arbitrator. This interest stems from their potential to provide a swift and cost-effective alternative to the traditional arbitration process, particularly given that over 31 million disputes are currently pending before Indian courts. However, legal and ethical concerns have been raised regarding the use of AI-arbitrators due to their susceptibility to incorporating biases stemming from racial, cultural, religious, and geographic stereotypes into their assessments. For instance, COMPAS, an AI-based risk assessment tool used in American judicial systems to predict the likelihood of a defendant becoming a repeat offender, was found to exhibit significant bias: it wrongly classified African American offenders as high risk nearly half of the time, raising serious concerns about its fairness and reliability. These concerns find firm support in both the UNCITRAL Model Law and the Indian Arbitration & Conciliation Act, each of which recognises challenges against a biased arbitrator. Consequently, the question arises whether the inherent biases of an AI-arbitrator render it an unusable proposition, even where the parties consent to its use. The answer is not straightforward, since a blanket ban on AI-arbitration would infringe upon the parties’ freedom to opt for their preferred arbitration process, even one with a fully AI-based arbitrator at its helm.
The authors argue that a balance must be struck between preserving the parties’ freedom to submit their dispute to the arbitrator of their choice and the potential costs that would arise from the mass unenforceability of AI-generated awards in courts. To achieve this balance, the authors suggest that AI-arbitration agreements should be permitted only where the parties fully understand the biases inherent in AI-arbitration and provide informed consent accordingly. This article then delves into measures that a regulatory framework could implement to ensure that parties are capable of providing informed consent to an AI-arbitration agreement.
The Need for Informed Consent
In the context of AI-arbitration, informed consent is fulfilled when the parties have access to sufficient and accurate information regarding the AI’s inherent biases and predispositions before entering into the agreement. Some counter the need for informed consent, arguing that a higher threshold of consent for selecting an AI-arbitrator imposes an unnecessary burden, especially when human arbitrators, who are equally susceptible to inherent biases, are exempt from such requirements. In response, the authors contend that equating AI-arbitrators with their human counterparts is misguided, since neither the manner in which their biases manifest nor the scale of their operations is comparable. AI in general is in its nascent phase, and the methods of investigating its inherent biases are accordingly complicated and time-consuming. Expecting both parties to become sufficiently knowledgeable through their own investigations is a tall order and would often lead to challenges at the time of enforcement. This is especially true of consumer-facing contracts in which an AI-arbitrator is employed by banks, large contractors, and other large-scale operators, where the opposing party is often an individual who lacks the resources to investigate the AI-arbitrator thoroughly. Additionally, an AI-arbitrator differs from a human in that the volume of cases it can resolve far exceeds that of any single arbitrator. Any bias successfully challenged by a party in one case would therefore trigger a domino effect, toppling every award the AI has previously rendered on the basis of a single challenge. In light of these factors, informed consent is a vital consideration in AI-based arbitration.
Integrating Informed Consent into AI-Arbitration Platforms
To ensure that the parties are capable of providing informed consent, the regulatory framework must provide for thorough examination of prospective AI-arbitration platforms and disclosure of any inherent predispositions of the AI. However, as previously discussed, the complexity of investigating such AIs means that the regulation cannot rely on an independent statutory body that evaluates and certifies each AI, as this would impose substantial costs on the state. Thus, the authors propose that the prospective AI-arbitrator be made open-source: the model on which the AI operates, whether a large language model or otherwise, should be publicly accessible to ensure transparency. An open-source AI offers significant benefits, as complete access to its inner workings allows experts worldwide to examine it and identify potential flaws, emulating a peer review of the code. The value of such expert scrutiny is illustrated by ChatGPT: although the model is proprietary, researchers who systematically tested it discovered inherent biases in favour of a more liberal political stance, flaws that ordinary users could not have detected on their own. Open-source access would make this kind of scrutiny far easier, since experts could inspect the code directly rather than infer biases from the model’s outputs. An open-source AI is also preferable to a confidential program because it would reduce arbitration costs for the parties by removing the need for a private investigation of the code in each case. It would likewise be more efficient for the arbitration platform, which would otherwise need to protect the secrecy of its code through rigorous non-disclosure agreements and other tedious privacy measures before permitting parties access for investigation. An open-source approach is therefore a win-win for the parties and the platform alike: it ensures transparency regarding any inherent biases or errors within the AI and eliminates the need for the parties to conduct a separate investigation in each case.
Furthermore, informed consent also requires ensuring that the version of the AI which both parties have tested and agreed upon is the version which actually acts as the arbitrator. This factor is relevant because an AI can be altered after the agreement through new updates. For example, if both parties consent to appointing an AI version released in 2024 as their arbitrator, any subsequent update could alter the AI’s assessment of the case, creating the possibility of challenges during enforcement. To prevent such enforcement errors, AI-arbitration agreements must specify the version chosen to act as arbitrator, and the AI-arbitration platform must retain the capability to assess disputes under previous versions as well. Alternatively, the parties can agree to consent to the use of a specific AI along with its future updates, so that no claim of a lack of informed consent can lead to challenges during enforcement.
Conclusion
AI-based arbitration promises low-cost, expedited dispute resolution, a prospect so enticing that it has already drawn investment to bring the idea to reality. While parties have the right to opt for AI-arbitration as their preferred process, fairness and equity must be ensured so that the resulting awards are not vulnerable to challenge in the courts. It is therefore in the interest of future AI-arbitration platforms to enable parties to give informed consent, securing the successful enforcement of awards.
Further, regulators face a daunting task, as they must constantly catch up with the rapid development of AI. Falling behind is not an option, since a defect in an AI program could impact not just one award, but potentially every award it has produced. The authors suggest a pre-authorisation process for AI-arbitration platforms, mandating open and peer-reviewed programs to uncover potential biases. This is because the complexity of AI means that its potential biases can be understood and revealed only by a small group of experts, not by all potential users. Peer review breaks down this complex information into simple, intelligible data that the parties can understand before agreeing. In the absence of regulation, the state risks allowing its people to consent to arrangements for which they cannot fairly be held accountable, since their consent is not informed. Hence, requiring AIs to pass certain tests before being released into the market is a justified regulatory filter.