BY SACHIN DUBEY AND AJITESH SRIVASTAVA, THIRD-YEAR STUDENTS AT NLU, ODISHA AND LLOYD LAW COLLEGE
INTRODUCTION
Artificial Intelligence (‘AI’) has become an integral part of our daily lives, influencing everything from smart home technology to cutting-edge medical diagnostics. However, its most profound influence is perhaps in transforming the landscape of the securities market. AI has advanced the efficiency of investor services and compliance operations, and its integration empowers stakeholders to make well-informed decisions, playing a pivotal role in market analysis, stock selection, investment planning, and portfolio management.
However, despite these advantages, AI poses risks such as algorithmic bias arising from biased data, lack of transparency in models, cybersecurity threats, and ethical concerns like job displacement and misuse, highlighting the need for strong regulatory oversight. Therefore, the Securities and Exchange Board of India (‘SEBI’), vide a consultation paper dated 13th November 2024, proposed amendments holding regulated entities (‘REs’) accountable for the use of AI and machine learning (‘ML’) tools.
These amendments enable SEBI to take action in the event of any shortcomings in the use of AI/ML systems. SEBI emphasises that these entities are required to safeguard data privacy, be accountable for actions derived from AI outputs, and fulfil their fiduciary responsibility towards investor data, while ensuring compliance with applicable laws.
In this article, the authors examine the necessity of the proposed amendments while highlighting their potential drawbacks.
NEED FOR THE PROPOSED AMENDMENTS
The need to hold REs accountable for AI/ML usage arises from the various risks associated with these technologies.
AI relies heavily on the customer inputs and datasets fed into it to arrive at an output. The problem is that humans find it very difficult to understand or explain how AI arrives at that output, a difficulty widely referred to as the ‘black box problem’. In designing machine learning algorithms, programmers set the goals the algorithm needs to achieve but do not prescribe the exact steps it should follow to solve the problem. Instead, the algorithm creates its own model by learning dynamically from the given data, analysing inputs, and integrating new information to address the problem. This opacity surrounding the explainability of AI outputs raises concerns about accountability for AI-generated outcomes within the legal field.
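To make the point concrete, consider a minimal, illustrative Python sketch. All data, feature names, and ‘approve/reject’ labels below are hypothetical and not drawn from any actual RE’s system; the programmer specifies only the objective, never the decision rules, and the trained model offers no human-readable chain of reasoning:

```python
# Minimal sketch of the "black box" problem: the programmer specifies only
# the goal (fit the labels), not the steps the model takes to decide.
# All data and feature names here are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(seed=0)

# Hypothetical applicant features: [income, age, past_defaults]
X = rng.normal(size=(500, 3))
# Hypothetical labels: 1 = "approve", 0 = "reject"
y = (X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=0.3, size=500) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)  # the model constructs its own internal decision logic

applicant = np.array([[0.4, -1.1, 2.0]])
print(model.predict(applicant))  # an output, but no human-readable reasoning:
# the decision is spread across 100 trees and thousands of learned thresholds.
```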
Further, if just one element in a dataset changes, it can cause the AI to learn and process information differently, potentially leading to outcomes that deviate from the intended use case. Data may also contain inherent biases that reinforce flawed decision-making, or inaccuracies that lead the algorithm to underestimate the probability of rare yet significant events. This can jeopardise customers’ interests and perpetuate discriminatory biases.
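This sensitivity to individual records can be shown with an equally hedged sketch. The example is entirely synthetic; a 1-nearest-neighbour model is chosen only because its dependence on single data points is easy to demonstrate:

```python
# Illustrative sketch: changing a single record in the training data
# changes the model's output for the same query. All values are synthetic.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])

model = KNeighborsClassifier(n_neighbors=1)
model.fit(X, y)
print(model.predict([[1.4]]))  # -> [0]; the nearest record (1.0) is labelled 0

# Alter just one element of the dataset: relabel the record at 1.0.
y_changed = np.array([0, 1, 1, 1])
model.fit(X, y_changed)
print(model.predict([[1.4]]))  # -> [1]; the same query now yields a different outcome
```

Production models are far larger, but the underlying point scales: outputs are a function of the data, so flaws or biases in the data propagate directly into the decisions.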
Additionally, relying on large datasets for AI functionality poses considerable risks to privacy and confidentiality. AI models may sometimes be trained on datasets containing customers’ private information or insider data. In such situations, it becomes crucial to establish accountability for breaches of privacy and confidentiality.
SHORTCOMINGS
SEBI’s proposal to amend regulations and assign responsibility for the use of AI and machine learning by REs is well-intentioned. However, it could create challenges for REs and other industry players, potentially slowing the adoption of AI and stifling innovation.
a. Firstly, SEBI’s proposal to assign responsibility for AI usage adopts a uniform, one-size-fits-all regulatory approach, which may ultimately hinder technological innovation. Effective AI regulation requires greater flexibility, favouring a risk-based framework that classifies AI systems by their risk levels and applies regulatory measures tailored to those risks. A notable example is the European Union’s AI Act, which adopts a proportionate, risk-based approach to AI regulation. This framework introduces a graduated system of requirements and obligations based on the level of risk an AI system poses to health, safety, and fundamental rights. The Act classifies risks into four distinct categories: unacceptable risk, high risk, limited risk, and minimal risk. Under this classification, AI practices falling in the unacceptable-risk category are prohibited outright, while others are allowed to continue subject to obligations designed to ensure transparency.
b. Secondly, while SEBI’s regulatory oversight of AI usage by REs is crucial for protecting investor interests, it is equally important to establish an internal management body to oversee the adoption and implementation of AI within these entities. SEBI could draw insights from the International Organization of Securities Commissions’ (‘IOSCO’) final report on AI and machine learning in market intermediaries and asset management. The report recommends that regulated entities designate senior management to oversee AI/ML development, deployment, monitoring, and controls. It also advocates a documented governance framework with clear accountability, assigning a qualified senior individual or team to approve initial deployments and major updates, potentially aligning this role with existing technology or data oversight.
c. Thirdly, SEBI has entirely placed the responsibility for AI and machine learning usage on REs, neglecting to define the accountability of external stakeholders or third-party providers. REs significantly rely on third parties for AI/ML technologies to ensure smooth operations. Hence, it is vital to clearly outline the responsibilities of these third parties within the AI value chain.
d. Fourthly, the Asia Securities Industry & Financial Markets Association (‘ASIFMA’) has raised the concern that financial institutions should not be held responsible for client decisions based on AI-generated outputs. It contends that it would be unjustified to hold an institution liable when an AI tool provides accurate information but the client subsequently makes an independent decision. This viewpoint runs against SEBI’s proposed amendments, which seemingly endorse broader institutional liability.
e. Lastly, SEBI’s proposed amendments and existing regulations remain silent on the standards or requirements for the datasets (input data) utilised by AI/ML systems to carry out their functions. While the amendments imply that REs must ensure AI models are trained on datasets that either do not require consent (e.g., publicly available data) or for which appropriate consent has been obtained, particularly under the Digital Personal Data Protection Act, 2023 (‘DPDPA’), SEBI could have defined more explicitly the standards for high-quality datasets suitable for AI/ML functionality. This is particularly crucial given that the data protection rules are yet to see the light of day.
CONCLUSION
While it is commendable that SEBI, recognising the growing use of AI/ML tools in the financial sector, has proposed amendments to hold REs accountable for their usage, it should have given due consideration to the factors mentioned above. It is vital that any policy introduced is crafted carefully so that it does not, in any way, discourage innovation and growth in the emerging fields of AI and ML technology.

