AYUSH RAJ AND TANMAY YADAV, FOURTH AND THIRD-YEAR STUDENTS AT GUJARAT NATIONAL LAW UNIVERSITY, GANDHINAGAR
INTRODUCTION
The Securities and Exchange Board of India’s (‘SEBI’) February 2025 amendments (the Intermediaries (Amendment) Regulations, 2025) inserted Regulation 16C, making any SEBI-regulated entity solely liable for the artificial intelligence and machine learning (‘AI/ML’) tools it uses, whether developed in-house or procured externally. This “sole responsibility” covers data privacy and security, the integrity of AI outputs, and compliance with applicable laws. While this shift rightly places clear duties on intermediaries, it leaves unaddressed how AI vendors themselves are held to account and how opaque AI systems are to be audited. In other words, SEBI’s framework robustly binds intermediaries, but contains potential gaps in vendor accountability and system auditability. This critique examines those gaps in light of international standards and practice.
SCOPE OF REGULATION 16C AND ITS LEGAL FRAMEWORK
Regulation 16C was notified on 10 February 2025 with immediate effect. In substance, it mirrors SEBI’s November 2024 consultation paper: “every person regulated by SEBI that uses AI…shall be solely responsible” for (a) the privacy and security of investor data, (b) any output of the AI it relies on, and (c) compliance with applicable laws. The rule applies “irrespective of the scale” of AI adoption, so even limited or third-party use triggers full liability. SEBI may enforce sanctions under its general powers for any violation.
This framework operates within SEBI’s established enforcement ecosystem. Violations can trigger the regulator’s full spectrum of penalties under the Securities and Exchange Board of India Act, 1992, ranging from monetary sanctions and cease-and-desist orders to suspension of operations. The regulation thus creates a direct enforcement pathway: any AI-related breach of investor protection, data security, or regulatory compliance automatically becomes a SEBI violation with corresponding penalties.
The legal significance lies in how this shifts risk allocation in the securities ecosystem. Previously, AI-related harms might fall into regulatory grey areas or involve complex questions of vendor versus user responsibility. Regulation 16C eliminates such ambiguity by making intermediaries the single point of accountability and liability for all AI deployments in their operations.
VENDOR-ACCOUNTABILITY GAP
In practice, intermediaries often rely on third-party models or data, yet the regulation places the entire onus on the intermediary and imposes no parallel duties on the AI vendor. If a supplier’s model has a hidden flaw or violates data norms, SEBI has no direct rulemaking or enforcement channel against that vendor; the intermediary alone must shoulder the penalties and investor fallout. This one-sided design could dilute accountability: vendors may disclaim liability in their contracts, knowing that SEBI’s enforcement powers reach the intermediary rather than the provider. The result is a regulatory blind spot whenever AI harms stem from vendor error.
Moreover, industry and global reports warn that relying on a few AI suppliers can create systemic risks. The Bank for International Settlements (BIS) Financial Stability Institute notes that “increased use of third-party services (data providers, AI model providers) could lead to dependency, disruption of critical services and lack of control,” exacerbated by vendor lock-in and market concentration. In other words, heavy dependence on external AI technologies can amplify risk: if one vendor fails, many intermediaries suffer concurrently. The US Treasury likewise highlighted the so‑called “vendor lock-in” problem in financial AI, urging regulators to require vendors to enable easy transitions between competing systems. SEBI’s framework currently lacks any mechanism to counteract lock‑in, such as mandated data or model portability requirements that would allow intermediaries to switch between AI providers without losing critical functionality.
Recognising these risks effectively places the responsibility on intermediaries to secure strong contractual controls over their AI suppliers. Regulated entities must perform thorough due diligence and establish back-to-back arrangements with AI vendors, including provisions on audit rights, data access, and vendor warranties. Because the regulation imposes no explicit requirements of this kind, however, the onus to negotiate such terms falls entirely on the intermediary; if it fails to do so, SEBI’s liability framework offers no mechanism to compel vendor-side transparency.
In practice, this gap means an intermediary could satisfy SEBI’s rule on paper by accepting liability, yet still face failures or disputes with no legal recourse beyond its own contract. The regulator’s approach is asymmetrical: intermediaries carry all the incentives to comply, while vendors carry none. SEBI’s reliance on intermediaries may have been pragmatic, but it becomes a weakness wherever vendors operate without accountability.
Consider an AI-driven trading recommendation system supplied by Vendor X. If X’s model generates a flawed recommendation that causes losses, Regulation 16C makes the brokerage (user) fully liable. Yet Vendor X could escape sanction if it sold the software “as is.” Under OECD principles, both the user and the supplier are expected to manage risk cooperatively, but SEBI’s text does not reflect that partnership.
The foregoing points suggest that SEBI may need to clarify how vendor risks are handled. Possible responses include explicitly requiring intermediaries to contractually secure vendor compliance and audit access, or extending regulatory standards to cover AI vendors serving Indian markets.
AUDITABILITY AND TRANSPARENCY OF AI SYSTEMS
A related issue is auditability. Even if intermediaries are liable, regulators must be able to verify how AI systems operate. However, modern AI systems, especially complex machine learning and generative models, can be “black boxes.” If SEBI cannot inspect a model’s logic or data flows, placing the entire liability on the intermediary becomes problematic.
Regulators worldwide emphasize that AI systems must be transparent and traceable. The OECD’s AI Principles state that actors should ensure “traceability … of datasets, processes and decisions made during the AI system lifecycle, to enable analysis of the AI system’s outputs and responses to inquiry”. Similarly, a UK financial‑services review emphasizes that auditability “refers to the ability of an AI system to be evaluated and assessed, an AI system should not be a ‘black box’”. In practical terms, auditability means maintaining logs of data inputs, model versions, decision rationales, and changes to algorithms, so that an independent reviewer can reconstruct how a given outcome was reached.
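To make this concrete, the sketch below is a minimal, hypothetical Python illustration (not anything prescribed by SEBI, the OECD, or the UK review) of the kind of per-decision audit record such logging could capture: the model version deployed, the inputs and outputs, a rationale, and a tamper-evident hash that an independent reviewer could later verify. All field names are illustrative assumptions.

```python
# Hypothetical audit-trail record for an AI-driven decision; illustrative only.
import hashlib
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone


@dataclass
class AIAuditRecord:
    model_name: str          # e.g. a vendor-supplied recommendation model
    model_version: str       # exact version/build deployed at decision time
    input_payload: dict      # data the model actually received
    output_payload: dict     # signal or recommendation the model produced
    decision_rationale: str  # explanation text or feature attributions, if available
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def fingerprint(self) -> str:
        """Hash of the full record, for tamper-evident, append-only log storage."""
        canonical = json.dumps(asdict(self), sort_keys=True)
        return hashlib.sha256(canonical.encode("utf-8")).hexdigest()


if __name__ == "__main__":
    # Hypothetical example: log one trading recommendation end to end.
    record = AIAuditRecord(
        model_name="vendor_x_trade_recommender",
        model_version="2.4.1",
        input_payload={"symbol": "ABC", "signal_window": "5d"},
        output_payload={"action": "BUY", "confidence": 0.82},
        decision_rationale="momentum feature dominated; see attribution report",
    )
    print(record.fingerprint())  # stored alongside the record for later review
```

A record of this kind, retained for each model output and each model-version change, is the sort of artefact that would let a regulator or forensic auditor reconstruct why a given signal was generated.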
Regulation 16C itself mandates neither audit trails nor explainability measures. It requires only that the intermediary take responsibility for the output; there is no explicit requirement for intermediaries (or their vendors) to preserve model logs or allow regulator inspection. Without such provisions, enforcing output accuracy or compliance with laws is hampered. For example, if an AI-generated trade signal caused a regulatory breach, SEBI (or a forensic auditor) would need access to the system’s internals to determine why.
Industry guidance suggests that firms should make auditability a contractual requirement when procuring AI. This could involve specifications on data retention, explainability reports, and independent testing. In the SEBI context, best practice would be for intermediaries to require AI providers to furnish any data necessary for SEBI audits.
In essence, two closely interconnected concerns arise. BIS notes that “limits to the explainability of certain complex AI models can result in risk management challenges, as well as lesser … supervisory insight into the build-up of systemic risks”. First, if AI outcomes cannot be readily audited, SEBI risks being unable to verify compliance, and absent explicit audit provisions, regulators and investors may lack confidence in the system’s integrity. Second, without mandated audit rights, firms may neglect to secure them in vendor agreements, even though sound operational practice demands audit clauses and due diligence. SEBI should therefore consider guidance or rules requiring regulated entities to secure audit rights over AI models, just as banks must under banking third-party rules.
CONCLUSION
SEBI’s insertion of Regulation 16C is a welcome and necessary move: it recognises that AI is now mission-critical in securities markets and rightly puts regulated entities on notice that AI outputs and data practices are not outside regulatory reach. Yet the regulation, as drafted, addresses only one side of a multi-party governance problem. Making intermediaries the default legal backstop, without parallel obligations on vendors or explicit auditability requirements, risks creating an enforcement illusion: liability on paper that is difficult to verify or remediate in practice.
To make the policy effective, SEBI should close the symmetry gap between users and suppliers and make AI systems practically observable. At a minimum this means clarifying the standard of liability, requiring intermediaries to retain model and data audit trails, and mandating contractual safeguards (audit rights, model-version logs, notification of material model changes, and portability requirements). If SEBI couples its clear allocation of responsibility with enforceable transparency and vendor-accountability mechanisms, it will have moved beyond a paper rule to a practical framework that preserves market integrity while enabling safe AI adoption.