The Corporate & Commercial Law Society Blog, HNLU


  • The Digital Dilemma: Reimagining Independent Directors’ Liability under Companies Act, 2013

    BY SVASTIKA KHANDELWAL, THIRD-YEAR STUDENT AT NLSIU, BANGALORE

    INTRODUCTION

    The 2025 breach compromising the personal data of 8.4 million users of Zoomcar underscores the growing prevalence of digital risks within corporate governance. Such incidents raise pressing concerns regarding the oversight obligations of boards, particularly independent directors (‘IDs’), and call for a critical examination of S.149(12), Companies Act, 2013 (‘the Act’), which limits ID liability to instances where acts of omission or commission by a company occur with their knowledge, attributable through board processes, and with their consent or connivance, or where they have not acted diligently.

    This piece argues that S.149(12) has not kept pace with the digital transformation of corporate operations and requires legislative reform to account for the dual challenges of digitalisation: the increasing integration of digital communication into corporate operations, and the growing incidence of digital corporate governance failures such as data breaches and cybersecurity lapses.

    Firstly, the piece traces the evolution of the IDs’ liability regime. It then examines the knowledge and consent test under the first part of S.149(12), arguing that it fails to address accountability challenges in the digital era. Subsequently, it analyses the diligence test as a more appropriate standard for ensuring meaningful oversight. Finally, the article explores how S.149(12) can be expanded to tackle the liability of IDs for digital governance failures effectively.

    UNDERSTANDING S.149(12) OF THE ACT: SCOPE AND DEVELOPMENT

    In India, the institution of the ID evolved in response to the country’s ‘insider model’ of corporate shareholding, in which promoter-driven concentrated ownership created tensions between majority and minority shareholders. This necessitated safeguards for minority shareholders and independent oversight of management. Before the 2013 Act, the duties of directors were shaped by general fiduciary principles rooted in common law, which lacked the specificity to address the majority-minority shareholder conflict effectively. A regulatory milestone came in 2000, when SEBI introduced Clause 49 of the Listing Agreement, requiring listed companies to appoint IDs. However, it offered limited guidance on the functions these directors were expected to perform and the stakeholder interests they were expected to protect. A more detailed approach followed in the 2013 Act, which explicitly defined the role of IDs in S.149(6), S.149(12), and Schedule IV, marking a transition from treating IDs as general fiduciaries to assigning them a more distinct role.

    IDs facilitate information symmetry and unbiased decision-making. They are also essential for raising concerns about unethical behaviour or breaches of the company’s code of conduct. Significantly, they must safeguard the interests of all stakeholders, especially minority shareholders. By staying independent and objective, they help the board make informed decisions.

    This article focuses on S.149(12) of the Act, which contains two grounds for holding IDs liable. First, liability arises if the company’s actions occurred with the ID’s knowledge, attributable through board processes, and with their consent or connivance. Second, liability arises from a lack of diligence. Since the provision uses “or,” both grounds operate independently; failing either can attract liability. While knowledge must relate to board proceedings, the duty of diligence extends beyond them: it is an autonomous and proactive duty, not confined to board discussions.

    REASSESSING THE KNOWLEDGE AND CONSENT TEST

    The piece argues that S.149(12)’s knowledge and consent standard is inadequate in the context of digital governance, where risks emerge rapidly and information is frequently acquired through digital channels.

    Firstly, courts have tended to apply S.149(12) narrowly, often focusing solely on the knowledge and consent test without going a step further to assess the duty of diligence. This incomplete approach weakens accountability and overlooks a key aspect of the provision. This narrow interpretation was evident in Global Infratech, where the IDs were cleared of liability due to insufficient evidence of their participation in board proceedings. Interestingly, while SEBI held executive directors to a standard of diligence and caution, it imposed no such obligation on IDs. The decision effectively allowed an ID to escape liability solely on the ground of not having knowledge acquired through board processes, without having to demonstrate that they exercised diligence by actively seeking relevant information. A similarly restricted interpretation was evident in the Karvy decision, where SEBI absolved IDs of liability because they had not been informed of ongoing violations in board meetings, without addressing their duty to proactively seek such information through due diligence.

    Further concern arises from the judiciary’s conflation of the knowledge test with involvement in day-to-day functioning. In MPS Infotecnics and Swam Software, IDs were not held liable because they were not involved in the day-to-day affairs of the company, a finding grounded in the belief that the IDs lacked knowledge of the wrongdoing. Such reasoning exposes a critical flaw in the knowledge test: treating an ID’s absence from daily affairs as proof that they were unaware of any misconduct dilutes the ID’s duty to exercise informed oversight over core strategic decisions and high-risk domains, including cybersecurity.

    This interpretation is especially problematic in view of digital governance failures. Grave corporate risks like data breaches and ransomware attacks arise from routine technological processes: storing user data, updating software, and managing cybersecurity are daily activities that are central to a company’s operations and survival. The “day-to-day functioning” standard therefore creates a perilous loophole, allowing an ID to escape liability by remaining wilfully ignorant of the company’s most critical area of risk. An ID can simply claim they lacked “knowledge” of a cybersecurity flaw because it was part of “day-to-day” IT work. Thus, this piece argues that the judiciary’s narrow reading of S.149(12), which applies only the knowledge test, is inadequate in the digital domain. IDs need not be technology experts, but they must ask the right questions, identify red flags, and ensure appropriate governance mechanisms, including for cybersecurity, are in place, reinforcing the need to apply the diligence test more robustly.

    Another shortcoming of this test is its over-reliance on attributing an ID’s knowledge only to matters raised in formal board processes. In the digital era, this approach overlooks the reality that board decision-making and oversight increasingly occur outside the confines of scheduled meetings. The integration of real-time digital communication channels such as Gmail and WhatsApp exposes crucial gaps. It creates an evidentiary vacuum, since highly probative indications of negligence, like the dismissal of a whistleblower’s alert or a decision to ignore a cybersecurity risk, may be discussed within informal digital communications. Limiting knowledge to board meetings enables plausible deniability: IDs may engage in and even influence critical decisions through private digital channels, omit these discussions from the official record, and later escape liability under the knowledge standard despite having complete awareness of the wrongdoing. Cyber crises unfold without warning, long before the next board meeting is convened; their rapidity and opacity require IDs to act through digital channels. The exclusion of these communications from the liability framework offers an easy shield from responsibility.

    Compounding this issue, the requirement of “consent or connivance” fails to capture the nuances of the digital corporate environment. Consent is no longer limited to clear, documented paper trails but is often expressed through digital cues. A “thumbs up” emoji in a WhatsApp group could signal agreement, acknowledgement, or simply receipt, giving IDs room to deny intent and escape liability. The problem is exacerbated by end-to-end encryption and disappearing-message features on some instant-messaging applications, which allow potential evidence to be erased. Moreover, connivance or covert cooperation can now take subtler digital forms, such as an ID editing a cloud-shared Google Document of an audit report, replacing “imminent risk” with “need routine system check” and intentionally downplaying a serious breach warning. The current wording of the provision is silent on whether this would make an ID accountable.

    Therefore, it is evident that the knowledge and consent test is insufficient in the face of pervasive digitalisation and warrants a wider interpretation in light of the foregoing developments in corporate operations.

    THE DILIGENCE TEST: A STRONGER STANDARD

    While ID liability has often been confined to the narrow ‘knowledge test,’ SEBI’s order in Manpasand Beverages Ltd. reasserts the importance of diligence. On 30 April 2024, SEBI held the company’s IDs responsible, noting that although they claimed a lack of access to vital documents, they made no effort to obtain them. This ruling signals a renewed commitment to holding directors accountable beyond mere knowledge.

    This is beneficial in the context of digital governance failures, as the diligence test provides a stronger framework for ensuring accountability. It imposes an affirmative obligation on IDs, as highlighted in Edserv Softsystems, where it was observed that due diligence requires questioning irregular transactions and following up persistently with uncooperative management. The Bombay Dyeing case held that IDs on audit committees are expected to question the information presented to them and actively uncover irregularities, even if deliberately hidden. It emphasised that IDs must question accuracy and demand clarity without relying solely on surface-level disclosures. The same heightened duty must apply to digital governance, where concealed cyber risks like breaches or ransomware pose equally serious threats and require equally proactive investigation.

    Therefore, the diligence test is more effective for tackling digital corporate governance failures, as it replaces passive awareness with active oversight. Since digital threats often remain hidden until it is too late, waiting for information is insufficient. The test is not a tool for operational meddling but for high-level strategic scrutiny, such as questioning a cybersecurity budget that falls below industry benchmarks for a data-intensive organisation.

    CONCLUSION: CHARTING THE WAY FORWARD

    As shown, S.149(12) of the Act, in its current form, appears ill-equipped to tackle the realities of digital corporate governance failures. This concern may be addressed through an evolved interpretation of the existing framework, potentially supplemented by a clarificatory Explanation to S.149(12), specifically tailored to digital threats.

     A logical starting point for this evolution is a broader reading of “knowledge.” It can be expanded to include not only information attributable to formal board meetings but also any material information communicated to, or reasonably accessible by, the ID through any mode, including digital means. Additionally, a rebuttable presumption of “consent or connivance” can be inserted where IDs, after gaining such knowledge, fail to record objection or dissent within a reasonable time, especially when the matter involves a material risk to the company or a breach of law. This approach does not set a high threshold; it merely shifts the onus and strengthens timely oversight, encouraging IDs to speak up. Given the potential severity of cyberattacks, such an approach aligns with the need for heightened vigilance in digital governance.

    Further, the enduring duty of due diligence may be interpreted to include a baseline level of digital literacy. While IDs need not be technology professionals, they must understand enough to ask relevant questions and assess whether management has adequately addressed digital risks. Without this foundational competence, IDs cannot meaningfully engage with cybersecurity, data governance, and related areas, leaving oversight dangerously superficial. Embedding this requirement under S.149(12) would make it a statutory duty, ensuring that a failure to acquire or apply such skills can directly trigger liability. In the modern corporate landscape, technology is not optional; it is essential and enduring. IDs must therefore be equipped to fulfil their duties in this environment.

  • SEBI’s AI Liability Regulation: Accountability and Auditability Concerns

    BY AYUSH RAJ AND TANMAY YADAV, FOURTH- AND THIRD-YEAR STUDENTS AT GUJARAT NATIONAL LAW UNIVERSITY, GANDHINAGAR

    INTRODUCTION

    The Securities and Exchange Board of India’s (‘SEBI’) February 2025 amendments (the Intermediaries (Amendment) Regulations, 2025) inserted Regulation 16C, making any SEBI-regulated entity solely liable for the AI/ML tools it uses, whether developed in-house or procured externally. This “sole responsibility” covers data privacy/security, the integrity of artificial intelligence (‘AI’) outputs, and compliance with laws. While this shift rightfully places clear duties on intermediaries, it leaves unaddressed how AI vendors themselves are held to account and how opaque AI systems are audited. In other words, SEBI’s framework robustly binds intermediaries but contains potential gaps in vendor accountability and system auditability. This critique explores those gaps in light of international standards and practice.

    SCOPE OF REGULATION 16C AND ITS LEGAL FRAMEWORK

    Regulation 16C was notified on 10 February 2025 with immediate effect. In substance, it mirrors SEBI’s November 2024 consultation paper: “every person regulated by SEBI that uses AI…shall be solely responsible” for (a) investor data privacy/security, (b) any output from the AI it relies on, and (c) compliance with applicable laws. The rule applies “irrespective of the scale” of AI adoption, meaning even small-scale or third-party use triggers full liability. SEBI may enforce sanctions under its general powers for any violation.

    This framework operates within SEBI’s established enforcement ecosystem. Violations can trigger the regulator’s full spectrum of penalties under the Securities and Exchange Board of India Act, 1992, ranging from monetary sanctions and cease-and-desist orders to suspension of operations. The regulation thus creates a direct enforcement pathway: any AI-related breach of investor protection, data security, or regulatory compliance automatically becomes a SEBI violation with corresponding penalties.

    The legal significance lies in how this shifts risk allocation in the securities ecosystem. Previously, AI-related harms might fall into regulatory grey areas or involve complex questions of vendor versus user responsibility. Regulation 16C eliminates such ambiguity by making intermediaries the single point of accountability, and liability, for all AI deployments in their operations.

    VENDOR-ACCOUNTABILITY GAP

    In practice, intermediaries often rely on third-party models or data, but the regulation places all onus on the intermediary, with no parallel duties imposed on the AI vendor. If a supplier’s model has a hidden flaw or violates data norms, SEBI has no direct rulemaking or enforcement channel against that vendor. Instead, the intermediary must shoulder penalties and investor fallout. This one-sided design could dilute accountability: vendors might disclaim liability in contracts, knowing enforcement power lies with SEBI, not with the provider. As a result, there is a regulatory blind spot whenever AI harms stem from vendor error.

    Moreover, industry and global reports warn that relying on a few AI suppliers can create systemic risks. The Bank for International Settlements (BIS) Financial Stability Institute notes that “increased use of third-party services (data providers, AI model providers) could lead to dependency, disruption of critical services and lack of control,” exacerbated by vendor lock-in and market concentration. In other words, heavy dependence on external AI technologies can amplify risk: if one vendor fails, many intermediaries suffer concurrently. The US Treasury likewise highlighted the so‑called “vendor lock-in” problem in financial AI, urging regulators to require vendors to enable easy transitions between competing systems. SEBI’s framework currently lacks any mechanism to counteract lock‑in, such as mandated data or model portability requirements that would allow intermediaries to switch between AI providers without losing critical functionality.

    The recognition of these risks effectively places the responsibility on intermediaries to secure strong contractual controls with AI suppliers. Regulated entities must perform thorough due diligence and establish back-to-back arrangements with AI vendors to mitigate risk, including provisions such as audit rights, data access, and vendor warranties. However, because explicit legal requirements are absent, the onus falls entirely on intermediaries to negotiate these terms; if they fail to do so, SEBI’s liability framework itself provides no enforcement of vendor-side transparency.

    In practice, this gap means an intermediary could satisfy SEBI’s rule on paper (having liability assigned), yet still face failures or disputes with no legal recourse beyond its own contract. The regulator’s approach is asymmetrical: intermediaries have all the incentives to comply, while vendors have none. SEBI’s choice to rely on intermediaries may have been pragmatic, but it is a potential weakness if vendors operate without accountability.

    Consider an AI-driven trading recommendation system supplied by Vendor X. If X’s model generates a flawed recommendation that causes losses, Regulation 16C makes the brokerage (user) fully liable. Yet Vendor X could escape sanction if it sold the software “as is.” Under OECD principles, both the user and the supplier are expected to manage risk cooperatively, but SEBI’s text does not reflect that partnership.

    The foregoing points suggest that SEBI may need to clarify how vendor risks are handled. Potential solutions could include: explicitly requiring intermediaries to contractually compel vendor compliance and audit access, or even extending regulatory standards to cover AI vendors serving Indian markets.

    AUDITABILITY AND TRANSPARENCY OF AI SYSTEMS

    A related issue is auditability. Even if intermediaries are liable, regulators must be able to verify how AI systems operate. However, modern AI, especially complex Machine Learning (ML) and generative models, can be “black boxes.” If SEBI cannot inspect the model’s logic or data flows, apportioning entire liability to an intermediary could be problematic.

    Regulators worldwide emphasize that AI systems must be transparent and traceable. The OECD’s AI Principles state that actors should ensure “traceability … of datasets, processes and decisions made during the AI system lifecycle, to enable analysis of the AI system’s outputs and responses to inquiry”. Similarly, a UK financial‑services review emphasizes that auditability “refers to the ability of an AI system to be evaluated and assessed, an AI system should not be a ‘black box’”. In practical terms, auditability means maintaining logs of data inputs, model versions, decision rationales, and changes to algorithms, so that an independent reviewer can reconstruct how a given outcome was reached.
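    In practical terms, the kind of audit trail described above can be illustrated with a short sketch. The following Python snippet is purely illustrative, not anything prescribed by SEBI, the OECD, or the UK review: it shows one possible shape of an append-only record an intermediary might retain for each AI-generated output so that a reviewer could later reconstruct which model version produced which output from which inputs. All names (AuditRecord, log_decision, audit_log.jsonl) are hypothetical.

```python
# Illustrative sketch only: a minimal append-only audit-trail record for an
# AI-generated decision. Field names and file format are hypothetical, not
# mandated by Regulation 16C or any guidance cited in this piece.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class AuditRecord:
    timestamp: str       # when the output was generated (UTC, ISO 8601)
    model_name: str      # which model produced the output
    model_version: str   # exact model version deployed at the time
    input_hash: str      # SHA-256 digest of the input actually fed to the model
    output: str          # the recommendation or signal produced
    rationale: str       # explanation or feature attribution, if available
    reviewed_by: str     # human reviewer, or "none" if fully automated


def log_decision(record: AuditRecord, path: str = "audit_log.jsonl") -> None:
    """Append one decision record to an append-only JSON Lines log."""
    with open(path, "a", encoding="utf-8") as log_file:
        log_file.write(json.dumps(asdict(record)) + "\n")


if __name__ == "__main__":
    raw_input_data = '{"ticker": "ABC", "features": [0.12, 0.87, 0.33]}'
    log_decision(AuditRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        model_name="trade-signal-model",
        model_version="2.3.1",
        input_hash=hashlib.sha256(raw_input_data.encode()).hexdigest(),
        output="BUY",
        rationale="momentum feature dominated (weight 0.61)",
        reviewed_by="none",
    ))
```

    Even a simple log of this kind, retained across model updates, would give a forensic auditor a starting point for reconstructing how a given outcome was reached, which is the core of the traceability that the OECD and UK guidance describe.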

    SEBI’s Regulation 16C does not itself mandate audit trails or explainability measures. It only requires the intermediary to take responsibility for the output. There is no explicit requirement for intermediaries (or their vendors) to preserve model logs or allow regulator inspection. Without such provisions, enforcement of output accuracy or compliance with laws is hampered. For example, if an AI-generated trade signal caused a regulatory breach, SEBI (or a forensic auditor) would need access to the system’s internals to determine why.

    Industry guidance suggests that firms should make auditability a contractual requirement when procuring AI. This could involve specifications on data retention, explainability reports, and independent testing. In the SEBI context, best practice would be for intermediaries to contractually require AI providers to furnish any data necessary for SEBI audits.

    In essence, two closely interconnected concerns arise. First, BIS notes that “limits to the explainability of certain complex AI models can result in risk management challenges, as well as lesser … supervisory insight into the build-up of systemic risks”. If AI outcomes cannot be easily audited, SEBI risks being unable to verify compliance, and without explicit audit provisions, regulators and investors may lack confidence in the system’s integrity. Second, without mandated audit provisions, firms may neglect to include them in vendor agreements, even though operational reality demands audit clauses and due diligence. SEBI should consider guidance or rules requiring regulated entities to secure audit rights over AI models, just as banks must under third-party banking rules.

    CONCLUSION

    SEBI’s insertion of Regulation 16C is a welcome and necessary move: it recognises that AI is now mission-critical in securities markets and rightly puts regulated entities on notice that AI outputs and data practices are not outside regulatory reach. Yet the regulation, as drafted, addresses only one side of a multi-party governance problem. Making intermediaries the default legal backstop, without parallel obligations on vendors or explicit auditability requirements, risks creating an enforcement illusion: liability on paper that is difficult to verify or remediate in practice.

    To make the policy effective, SEBI should close the symmetry gap between users and suppliers and make AI systems practically observable. At a minimum this means clarifying the standard of liability, requiring intermediaries to retain model and data audit trails, and mandating contractual safeguards (audit rights, model-version logs, notification of material model changes, and portability requirements). If SEBI couples its clear allocation of responsibility with enforceable transparency and vendor-accountability mechanisms, it will have moved beyond a paper rule to a practical framework that preserves market integrity while enabling safe AI adoption.