The Corporate & Commercial Law Society Blog, HNLU

Tag: ai

  • The Digital Dilemma: Reimagining Independent Directors’ Liability under Companies Act, 2013


    BY SVASTIKA KHANDELWAL, THIRD-YEAR STUDENT AT NLSIU, BANGALORE

    INTRODUCTION

    The 2025 breach compromising the personal data of 8.4 million users of Zoomcar underscores the growing prevalence of digital risks within corporate governance. Such incidents raise pressing concerns regarding the oversight obligations of boards, particularly independent directors (‘IDs’), and call for a critical examination of S.149(12), Companies Act, 2013 (‘the Act’), which limits ID liability to instances where acts of omission or commission by a company occur with their knowledge, attributable through board processes, and with their consent or connivance, or where they have not acted diligently.

    This piece argues that S.149(12) has not kept pace with the digital transformation of corporate operations and requires legislative reform to account for the dual challenges of digitalisation: the increasing integration of digital communication in corporate operations, and the growing incidence of digital corporate governance failures like data breaches and cybersecurity lapses.

    Firstly, the piece traces the evolution of the IDs’ liability regime. Further, it examines the knowledge and consent test under the first part of S.149(12), arguing that it fails to address accountability challenges in the digital era. Subsequently, it analyses the diligence test as a more appropriate standard for ensuring meaningful oversight. Finally, the article explores how S.149(12) can be expanded to effectively tackle the liability of IDs for digital governance failures.

    UNDERSTANDING S.149(12) OF THE ACT: SCOPE AND DEVELOPMENT

    In India, the institution of the ID evolved in response to the country’s ‘insider model’ of corporate shareholding, where promoter-driven concentrated ownership created tensions between majority and minority shareholders. This necessitated safeguards for minority shareholders and independent oversight of management. Before the 2013 Act, the duties of directors were shaped by general fiduciary principles rooted in common law, which lacked the specificity to address the majority-minority shareholder conflict effectively. A regulatory milestone came when SEBI introduced Clause 49 of the Listing Agreement in 2000, requiring listed companies to appoint IDs. However, it offered limited guidance on the functions and stakeholder interests these directors were expected to protect. A more detailed approach was adopted in the 2013 Act, which explicitly defined the role of IDs in S.149(6), S.149(12), and Schedule IV. This marked a transition from treating IDs as general fiduciaries to assigning them a more distinct role. IDs facilitate information symmetry and unbiased decision-making. Furthermore, they are essential for raising concerns about unethical behaviour or breaches of the company’s code of conduct. Significantly, they must safeguard the interests of all stakeholders, especially minority shareholders. By staying independent and objective, they help the board make informed decisions.

    This article focuses on S.149(12) of the Act, which contains two grounds for holding IDs liable. First, if the company’s actions occurred with the ID’s knowledge and consent or connivance, where such knowledge is attributable through board processes. Second, liability arises from a lack of diligence. Since the provision uses “or,” both grounds function independently; satisfying either can attract liability. While knowledge must relate to board proceedings, the duty of diligence extends beyond this. It is an autonomous and proactive duty, not confined to board discussions.
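
    The disjunctive structure described above can be captured in a minimal sketch. This is an illustration only: the boolean inputs are crude stand-ins for what are, in reality, fact-intensive legal findings, and the function names are the author’s own shorthand, not statutory language.

    ```python
    # A minimal sketch of the two independent grounds of S.149(12), as read above.
    # Boolean inputs are simplifying assumptions, not legal determinations.
    def id_liable(knowledge_via_board_process: bool,
                  consent_or_connivance: bool,
                  acted_diligently: bool) -> bool:
        # Ground 1: knowledge attributable through board processes,
        # combined with consent or connivance.
        ground_one = knowledge_via_board_process and consent_or_connivance
        # Ground 2: lack of diligence, an autonomous duty not confined
        # to board discussions.
        ground_two = not acted_diligently
        return ground_one or ground_two  # "or": each ground operates independently

    # An ID with no board-process knowledge can still be liable for lack of diligence.
    print(id_liable(False, False, acted_diligently=False))  # True
    ```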

    REASSESSING THE KNOWLEDGE AND CONSENT TEST

    The piece argues that S.149(12)’s knowledge and consent standard is inadequate in the context of digital governance, where risks emerge rapidly and information is frequently acquired through digital channels.

    Firstly, courts have tended to apply S.149(12) narrowly, often focusing solely on the knowledge and consent test and failing to go a step further to assess the duty of diligence. This incomplete approach weakens accountability and overlooks a key aspect of the provision. This narrow interpretation was evident in Global Infratech, where the IDs were cleared of liability due to insufficient evidence indicating their participation in board proceedings. Interestingly, while SEBI held executive directors to a standard of diligence and caution, it imposed no such obligation on IDs. The decision emphasised that an ID can escape liability solely on the ground of not having knowledge acquired through board processes, without demonstrating that they exercised diligence by actively seeking relevant information. A similar restricted interpretation was evident in the Karvy decision, where SEBI absolved IDs of liability because they had not been informed of ongoing violations in board meetings, without addressing their duty to proactively seek such information through due diligence.

    Further concern arises from the judiciary’s conflation of the knowledge test with involvement in day-to-day functioning. In MPS Infotecnics and Swam Software, IDs were not held liable because they were not involved in the day-to-day affairs of the company, a finding grounded in the belief that the ID lacked knowledge of the wrongdoing. Such reasoning exposes a critical flaw in the knowledge test: treating an ID’s absence from daily affairs as proof that they were unaware of any misconduct dilutes the ID’s duty to exercise informed oversight over core strategic decisions and high-risk domains, including cybersecurity.

    This interpretation is especially problematic in view of digital governance failures. Grave corporate risks like data breaches and ransomware attacks arise from routine technological processes. Storing user data, updating software, and managing cybersecurity are daily activities that are central to a company’s operations and survival. The “day-to-day functioning” standard therefore creates a perilous loophole: it allows an ID to escape liability by remaining wilfully ignorant of the company’s most critical area of risk. An ID can simply claim they lacked “knowledge” of a cybersecurity flaw because it was part of “day-to-day” IT work. Thus, this piece argues that the judiciary’s narrow reading of S.149(12), which applies only the knowledge test, is inadequate in the digital domain. IDs need not be technology experts. Still, they must ask the right questions, identify red flags, and ensure appropriate governance mechanisms, including for cybersecurity, are in place, reinforcing the need to apply the diligence test more robustly.

    Another shortcoming of this test is its over-reliance on attributing an ID’s knowledge only to matters raised in formal board processes. In the digital era, this approach overlooks the reality that board decision-making and oversight increasingly occur outside the confines of scheduled meetings. The integration of real-time digital communication channels such as Gmail and WhatsApp exposes crucial gaps. It creates an evidentiary vacuum, since highly probative indications of negligence, like the dismissal of a whistleblower’s alert or a decision to ignore a cybersecurity risk, may be discussed within informal digital communications. Limiting knowledge to board meetings enables plausible deniability: IDs may engage in and even influence critical decisions through private digital channels, omit these discussions from the official record, and later easily escape liability under the knowledge standard, despite having complete awareness of the wrongdoing. Cyber crises unfold without warning, long before the next board meeting is convened. Their rapidity and opacity require IDs to act through digital channels. The exclusion of these communications from the liability framework offers an easy shield from responsibility.

    Compounding this issue, the requirement of “consent or connivance” fails to capture the nuances of the digital corporate environment. Consent is no longer limited to clear, documented paper trails but is often expressed through various digital cues. A “thumbs up” emoji in a WhatsApp group could signal agreement, acknowledgement, or simply receipt, giving IDs room to deny intent and escape liability. This problem is exacerbated by end-to-end encryption and disappearing-message features on some instant-messaging applications, which allow potential evidence to be erased. Moreover, connivance or covert cooperation can now take subtler digital forms, such as an ID editing a cloud-shared Google Document in an audit report, replacing “imminent risk” with “need routine system check” and thereby intentionally downplaying a serious breach warning. The current wording of the provision is silent on whether this would make an ID accountable.

    Therefore, it is evident that the knowledge and consent test is insufficient in the face of pervasive digitalisation and warrants a wider interpretation in light of the foregoing developments in corporate operations.

    THE DILIGENCE TEST: A STRONGER STANDARD

    While ID liability has often been confined to the narrow ‘knowledge test,’ SEBI’s order in Manpasand Beverages Ltd. reasserts the importance of diligence. On 30 April 2024, SEBI held the company’s IDs responsible, noting that although they claimed a lack of access to vital documents, they made no effort to obtain them. This ruling signals a renewed commitment to holding directors accountable beyond mere knowledge.

    This is beneficial in the context of digital governance failures, as the diligence test provides a stronger framework for ensuring accountability. It imposes an affirmative obligation on IDs, as highlighted in Edserv Softsystems, where it was observed that due diligence requires questioning irregular transactions and following up persistently with uncooperative management. The Bombay Dyeing case held that IDs on audit committees are expected to question the presented information and actively uncover irregularities, even those deliberately hidden. It emphasised that IDs must question accuracy and demand clarity without relying solely on surface-level disclosures. The same heightened duty must apply to digital governance, where concealed cyber risks like breaches or ransomware pose equally serious threats and require equally proactive investigation.

    Therefore, the diligence test is more effective for tackling digital corporate governance failures, as it replaces passive awareness with active oversight. Since digital threats often remain hidden until it is too late, waiting for information is insufficient. The test is not a tool for operational meddling but for high-level strategic scrutiny, such as questioning a cybersecurity budget that falls below industry benchmarks for a data-intensive organisation.

    CONCLUSION: CHARTING THE WAY FORWARD

    As shown, S.149(12) of the Act, in its current form, appears ill-equipped to tackle the realities of digital corporate governance failures. This concern may be addressed through an evolved interpretation of the existing framework, potentially supplemented by a clarificatory Explanation to S.149(12), specifically tailored to digital threats.

     A logical starting point for this evolution is a broader reading of “knowledge.” It can be expanded to include not only information attributable to formal board meetings but also any material information communicated to, or reasonably accessible by, the ID through any mode, including digital means. Additionally, a rebuttable presumption of “consent or connivance” can be inserted where IDs, after gaining such knowledge, fail to record objection or dissent within a reasonable time, especially when the matter involves a material risk to the company or a breach of law. This approach does not set a high threshold; it merely shifts the onus and strengthens timely oversight, encouraging IDs to speak up. Given the potential severity of cyberattacks, such an approach aligns with the need for heightened vigilance in digital governance.

    Further, the duty of due diligence may be interpreted to include a baseline level of digital literacy. While IDs need not be technology professionals, they must understand enough to ask relevant questions and assess whether management has adequately addressed digital risks. Without this foundational competence, IDs cannot meaningfully engage with cybersecurity, data governance, and related domains, leaving oversight dangerously superficial. Embedding this requirement under S.149(12) would make it a statutory duty, ensuring that failure to acquire or apply such skills can directly trigger liability. In the modern corporate landscape, technology is not optional; it is essential and enduring. Therefore, IDs must be equipped to fulfil their duties in this environment.

  • Contesting The ‘Big Tech’ Tag: India’s Digital Competition Bill At A Turning Point


    BY UJJWAL GUPTA AND BHAVISHYA GOSWAMI, SECOND-YEAR STUDENTS AT RMLNLU, LUCKNOW

    INTRODUCTION

    With India’s digital economy nearly five times more productive than the rest of the economy, technology companies have become central economic actors in a rapidly digitalising India, prompting the need for a digital competition law to prevent the build-up of market power before it materialises. The Digital Competition Bill, 2024 (‘DCB’), aims to introduce ex-ante oversight to ensure competition in digital markets, complementing the existing ex-post regime under the Competition Act, 2002. The DCB envisages a regime to identify Systemically Significant Digital Enterprises (‘SSDEs’) and to impose conduct obligations on them.

    However, the draft has sparked discussion about whether its design strikes the proper balance between restraining potential gatekeepers and protecting the growth of India’s tech ecosystem. While industry players and policy-makers generally agree on the necessity of controlling highly concentrated digital power, they worry that the SSDE tag may negatively affect rapidly growing Indian companies. The emerging proposal to allow companies to contest their SSDE designation reflects this balance-seeking approach. It indicates an effort to ensure that protecting competition does not come at the cost of fair treatment for regulated entities, i.e., that regulation does not hamper innovation, investment, and the rise of domestic digital companies.

    II. The SSDE Designation Debate

    One of the key ideas of the DCB is the SSDE: an entity that, due to its scale, reach, or market interlinkages, requires ex-ante regulatory oversight. Under section 3 of the draft Bill, a company may be designated as an SSDE if it meets certain financial and user-based criteria, for example, a turnover in India of ₹4,000 crore, a global market capitalisation of USD 75 billion, or at least one crore end users. In addition, the Competition Commission of India (‘CCI’) can identify an enterprise as an SSDE even if it does not meet these quantitative criteria, using qualitative factors like network effects, market dependence, or data-driven advantages. This allows the CCI to take preventive measures by identifying “gatekeepers” before their dominance becomes monopoly power.
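
    The designation logic described above can be sketched roughly as follows. The thresholds mirror the figures reported for the draft Bill, but how the financial and user-based tests combine is a simplifying assumption here, and the field names are illustrative rather than statutory.

    ```python
    # Illustrative sketch of the draft DCB's SSDE designation logic. Thresholds
    # mirror reported figures; the combination of the financial and user tests
    # is a simplifying assumption, not the statutory text.
    from dataclasses import dataclass

    @dataclass
    class Enterprise:
        india_turnover_crore_inr: float  # annual India turnover, in crore INR
        global_mcap_bn_usd: float        # global market capitalisation, billion USD
        end_users: int                   # end users (one crore = 10,000,000)

    def meets_quantitative_criteria(e: Enterprise) -> bool:
        financial_strength = (
            e.india_turnover_crore_inr >= 4_000 or e.global_mcap_bn_usd >= 75
        )
        user_spread = e.end_users >= 10_000_000
        return financial_strength and user_spread

    def is_ssde(e: Enterprise, cci_qualitative_call: bool = False) -> bool:
        # The CCI may also designate below the numeric thresholds on
        # qualitative grounds like network effects or data-driven advantages.
        return meets_quantitative_criteria(e) or cci_qualitative_call

    # Example: a domestic firm crossing the India-turnover and user thresholds.
    firm = Enterprise(india_turnover_crore_inr=4_500, global_mcap_bn_usd=6,
                      end_users=12_000_000)
    print(is_ssde(firm))  # True: designated despite no finding of actual harm
    ```

    The mechanical nature of such a check is precisely what drives the over-inclusiveness concern discussed next: a firm can trip the thresholds by growing, without any finding of harmful market power.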

    However, the Parliamentary Standing Committee and industry associations have pointed out that India’s comparatively low user threshold (one crore end users) might prematurely rope in rapidly growing domestic firms, like Zomato or Paytm, that are still consolidating their market positions. By equating India’s digital scale with that of smaller Western markets, the Bill could act as a silent killer of innovation, deterring investment and freezing the entrepreneurial spirit. The concern is that the Bill’s broad definition of “systemic significance” could impose a growth penalty and disincentivise the very growth India seeks to encourage under its “Digital India” and “Startup India” programmes.

    Globally, the DCB draws clear inspiration from the European Union’s Digital Markets Act, 2022 (‘DMA’) and the UK’s Digital Markets, Competition and Consumers Act, 2024 (‘DMCC’), each of which aims to control the gatekeeping power of big tech companies. However, their implementation varies. The DMA is limited to ten defined “core platform services”, and it has already identified seven gatekeepers: Alphabet, Amazon, Apple, Booking, ByteDance, Meta, and Microsoft. Moreover, it permits rebuttals under exceptional circumstances, a measure absent from the current draft DCB. The DMCC creates the concept of “strategic market status” for dominant firms and thus puts more focus on tailor-made conduct rules. Under Schedule I, the draft DCB identifies nine “Core Digital Services”, similar to the DMA’s list but excluding “virtual assistants”, and introduces “Associate Digital Enterprises”, defined under section 2(2), an Indian innovation to ensure group-level accountability.

    III. The Case for a Rebuttal Mechanism

    As established earlier, a major concern of technology firms about the DCB is the lack of a mechanism to challenge a designation as an SSDE. These firms see such a designation as imposing high compliance costs and reputational risk, potentially labelling them as monopolistic even before any wrongdoing is established.

    The Twenty-Fifth Report of the Standing Committee on Finance recognised this problem. It noted that the current proposal has no provision for rebutting the presumption of designation based on quantitative thresholds, and accordingly suggested drawing on Article 3(5) of the DMA by implementing a “rebuttal mechanism in exceptional cases”. This would allow companies that meet or exceed the quantitative criteria to demonstrate that they do not possess the qualitative features of gatekeepers, such as entrenched dominance or cross-market leveraging.

    Article 3(5) of the DMA is instructive here. Under it, companies can present “sufficiently substantiated arguments” that “manifestly call into question” their presumed gatekeeper status. In ByteDance v. Commission, the General Court of the European Union set a high standard, demanding that companies bring overwhelming evidence rather than mere technical objections. Firms like Apple, Meta, and ByteDance have used this provision to challenge their designation; however, the evidentiary burden remains significant, and market investigations continue even though compliance with obligations is expected within six months of designation. Yet the EU’s model illustrates that a rebuttal mechanism does not weaken enforcement; rather, it enhances it by allowing flexibility in rapidly changing markets without compromising the regulator’s intention.

    The implementation of a similar mechanism in India would be beneficial in several ways. It would enhance the predictability of regulation, discourage the over-designation of large but competitive firms, and send a signal of institutional maturity consistent with international standards. In this context, the Centre is reportedly considering the introduction of an appeal mechanism that would allow firms to contest their designation after a market study on the digital sector is completed. However, the government still needs to address the possible disadvantages, such as delayed enforcement against dominant players, the procedural burden on the CCI, and the risk of strategic litigation by well-funded corporations.

    IV. Dynamic vs. Fixed Metrics: Rethinking ‘Big Tech’

    The biggest challenge in the DCB lies in the criteria for identifying SSDEs: the choice between fixed quantitative metrics and dynamic qualitative assessments will shape both administrative efficiency and the regime’s long-term success. The DCB primarily follows fixed metrics modelled on the DMA, using quantitative criteria such as valuation or turnover for SSDE designation.

    The biggest advantage of fixed metrics is their speed and legal certainty. Clear numerical boundaries greatly simplify administrative screening, allowing the CCI to quickly identify firms that pose competitive risks. However, this approach has attracted considerable criticism. Industry stakeholders opine that the thresholds in the DCB are “too low” and oversimplistic given India’s unique economic context and population scale.

    Another limitation is the risk of arbitrariness: a benchmark based solely on numerical terms can become disconnected from the regulatory goal of identifying genuine, entrenched competitive harm. For instance, in a market as large as India, a high user count may reflect only successful scaling and effective service delivery rather than a real ability to act as an unchallengeable bottleneck. This prospect, of restrictions imposed merely because a firm is successful, irrespective of whether it has demonstrated any specific harmful market power, has led to a widespread demand that firms designated as SSDEs be allowed to contest the designation, with the tag revoked if they prove they lack entrenched market power or competitive harm.

    On the other hand, dynamic criteria are recognised in the DMCC, under which a firm must possess ‘substantial and entrenched market power’. This allows the UK regime to impose conduct requirements based on qualitative and contextual market analysis rather than purely quantitative analysis. However, its effective application demands substantial institutional capacity and careful legal justification when imposing terms on powerful firms.

    The CCI itself has recognised dynamic criteria and provided a roadmap highlighting the challenges arising from the structural control that big players exercise across the entire AI value chain and ecosystem, especially control over data, computing resources, and models. On this view, the definition of “significant presence” should expand beyond turnover to incorporate a firm’s control over proprietary, high-quality resources such as high-end infrastructure.

    V. The Road Ahead: Regulation without Stifling Growth

    In its evolving shape, the DCB will carry significant responsibility for managing the compliance needs of so large a country. To that end, the government is considering the establishment of a dedicated Digital Markets Unit within the CCI, responsible for communicating with industry, academia, regulators, government, and other stakeholders, and for facilitating cross-divisional discussions. Timely coordination of this kind would avert the structural damage that delays could otherwise cause.

    Yet another challenge is the very limited capacity of Indian regulators compared to other jurisdictions, which makes the execution of prescriptive, technically complex regulations extremely challenging. This deficit of specialised economists, data scientists, and technology lawyers could be the deciding factor in a fast-changing field, and India needs to address it urgently.

    India’s foremost priority is job creation through rapid growth, so that sufficient wealth can be generated across all age groups. In the present scenario, policy experts have criticised the DCB as “anti-bigness and anti-successful firms”, discouraging Indian firms from expanding globally. Therefore, the DCB should strike a balance that gives a fillip to market competitiveness while preserving domestic digital scale and innovation.

    The DCB also overlaps with the recently implemented amendments to the Competition Act, 2002. The Competition (Amendment) Act, 2023, introduced the Deal Value Threshold (‘DVT’), which makes it compulsory for any merger or acquisition exceeding INR 20 billion in deal value to be notified in advance. The problem is the potential friction between conduct control, which the DCB would exercise through its conduct rules and prohibitions, and structural control, since mergers and acquisitions are subject to DVT clearance under the Competition (Amendment) Act.

    This dual scrutiny increases legal complexity and transactional costs. If the proposed Digital Markets Unit under the DCB lacks clear guidelines for harmonising the inconsistencies between conduct requirements and merger clearance conditions, the result will be to slow down acquisitions essential for firms to scale, contradicting the overall aim of promoting efficient market dynamics.

  • SEBI’s AI Liability Regulation: Accountability and Auditability Concerns


    BY AYUSH RAJ AND TANMAY YADAV, FOURTH- AND THIRD-YEAR STUDENTS AT GUJARAT NATIONAL LAW UNIVERSITY, GANDHINAGAR

    INTRODUCTION

    The Securities and Exchange Board of India’s (‘SEBI’) February 2025 amendments (Intermediaries (Amendment) Regulations, 2025) inserted Regulation 16C, making any SEBI-regulated entity solely liable for the AI/ML tools it uses, whether developed in-house or procured externally. This “sole responsibility” covers data privacy and security, the integrity of artificial intelligence (‘AI’) outputs, and compliance with laws. While this shift rightfully places clear duties on intermediaries, it leaves unaddressed how AI vendors themselves are held to account and how opaque AI systems are audited. In other words, SEBI’s framework robustly binds intermediaries but contains potential gaps in vendor accountability and system auditability. This critique explores those gaps in light of international standards and practice.

    SCOPE OF REGULATION 16C AND ITS LEGAL FRAMEWORK

    Regulation 16C was notified on February 10, 2025, with immediate effect. In substance, it mirrors SEBI’s November 2024 consultation paper: “every person regulated by SEBI that uses AI…shall be solely responsible” for (a) investor data privacy and security, (b) any output from the AI it relies on, and (c) compliance with applicable laws. The rule applies “irrespective of the scale” of AI adoption, meaning even small or third-party use triggers full liability. SEBI may enforce sanctions under its general powers for any violation.

    This framework operates within SEBI’s established enforcement ecosystem. Violations can trigger the regulator’s full spectrum of penalties under the Securities and Exchange Board of India Act, 1992, ranging from monetary sanctions and cease-and-desist orders to suspension of operations. The regulation thus creates a direct enforcement pathway: any AI-related breach of investor protection, data security, or regulatory compliance automatically becomes a SEBI violation with corresponding penalties.

    The legal significance lies in how this shifts risk allocation in the securities ecosystem. Previously, AI-related harms might fall into regulatory grey areas or involve complex questions of vendor versus user responsibility. Regulation 16C eliminates such ambiguity by making intermediaries the single point of accountability, and liability, for all AI deployments in their operations.

    VENDOR-ACCOUNTABILITY GAP

    In practice, intermediaries often rely on third-party models or data, but the regulation places the entire onus on the intermediary, with no parallel duties imposed on the AI vendor. If a supplier’s model has a hidden flaw or violates data norms, SEBI has no direct rulemaking or enforcement channel against that vendor; instead, the intermediary must shoulder the penalties and investor fallout. This one-sided design could dilute accountability: vendors might disclaim liability in contracts, knowing that SEBI’s enforcement power reaches the intermediary, not the provider. As a result, there is a regulatory blind spot whenever AI harms stem from vendor error.

    Moreover, industry and global reports warn that relying on a few AI suppliers can create systemic risks. The Bank for International Settlements (BIS) Financial Stability Institute notes that “increased use of third-party services (data providers, AI model providers) could lead to dependency, disruption of critical services and lack of control,” exacerbated by vendor lock-in and market concentration. In other words, heavy dependence on external AI technologies can amplify risk: if one vendor fails, many intermediaries suffer concurrently. The US Treasury likewise highlighted the so‑called “vendor lock-in” problem in financial AI, urging regulators to require vendors to enable easy transitions between competing systems. SEBI’s framework currently lacks any mechanism to counteract lock‑in, such as mandated data or model portability requirements that would allow intermediaries to switch between AI providers without losing critical functionality.

    The recognition of these risks inherently places a responsibility on intermediaries to secure strong contractual controls with AI suppliers. This requires regulated entities to perform thorough due diligence and establish back-to-back arrangements with AI vendors to mitigate risk. Such agreements must include provisions like audit rights, data access, and vendor warranties. However, because explicit legal requirements are absent, the onus falls entirely on intermediaries to negotiate these terms. Absent such terms, SEBI’s liability framework itself provides no enforcement of vendor-side transparency.

    In practice, this gap means an intermediary could satisfy SEBI’s rule on paper (having liability assigned), yet still face failures or disputes with no legal recourse beyond its own contract. The regulator’s approach is asymmetrical: intermediaries have all the incentives to comply, while vendors have none. SEBI’s choice to rely on intermediaries may have been pragmatic, but it is a potential weakness if vendors operate without accountability.

    Consider an AI-driven trading recommendation system supplied by Vendor X. If X’s model generates a flawed recommendation that causes losses, Regulation 16C makes the brokerage (user) fully liable. Yet Vendor X could escape sanction if it sold the software “as is.” Under OECD principles, both the user and the supplier are expected to manage risk cooperatively, but SEBI’s text does not reflect that partnership.

    The foregoing points suggest that SEBI may need to clarify how vendor risks are handled. Potential solutions could include: explicitly requiring intermediaries to contractually compel vendor compliance and audit access, or even extending regulatory standards to cover AI vendors serving Indian markets.

    AUDITABILITY AND TRANSPARENCY OF AI SYSTEMS

    A related issue is auditability. Even if intermediaries are liable, regulators must be able to verify how AI systems operate. However, modern AI, especially complex Machine Learning (ML) and generative models, can be “black boxes.” If SEBI cannot inspect the model’s logic or data flows, apportioning entire liability to an intermediary could be problematic.

    Regulators worldwide emphasize that AI systems must be transparent and traceable. The OECD’s AI Principles state that actors should ensure “traceability … of datasets, processes and decisions made during the AI system lifecycle, to enable analysis of the AI system’s outputs and responses to inquiry”. Similarly, a UK financial‑services review emphasizes that auditability “refers to the ability of an AI system to be evaluated and assessed, an AI system should not be a ‘black box’”. In practical terms, auditability means maintaining logs of data inputs, model versions, decision rationales, and changes to algorithms, so that an independent reviewer can reconstruct how a given outcome was reached.
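
    To make this concrete, a minimal audit-trail sketch follows. The schema, field names, and file format are illustrative assumptions by way of example, not a format prescribed by SEBI, the OECD, or the UK review.

    ```python
    # A minimal sketch of an append-only audit log recording inputs, model
    # version, output, and rationale per AI decision, so a reviewer can later
    # reconstruct how an outcome was reached. Schema is an assumption.
    import hashlib
    import json
    from datetime import datetime, timezone

    def log_ai_decision(logfile: str, model_version: str, inputs: dict,
                        output: str, rationale: str) -> None:
        """Append one audit record per AI decision to a JSON-lines log."""
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": model_version,   # which model produced the output
            "input_hash": hashlib.sha256(     # fingerprint of the exact inputs
                json.dumps(inputs, sort_keys=True).encode()
            ).hexdigest(),
            "inputs": inputs,
            "output": output,
            "rationale": rationale,           # human-readable decision basis
        }
        with open(logfile, "a") as f:
            f.write(json.dumps(record) + "\n")  # append-only, one record per line

    # Example: recording a trade signal so an auditor can later ask "why?"
    log_ai_decision(
        "ai_audit.jsonl",
        model_version="risk-model-v2.3.1",
        inputs={"ticker": "XYZ", "window_days": 30},
        output="SELL",
        rationale="volatility score exceeded configured threshold",
    )
    ```

    Even a log this simple would let a forensic reviewer tie a given output to a specific model version and input set, which is the core of what auditability demands.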

    SEBI’s Regulation 16C does not itself mandate audit trails or explainability measures. It only requires the intermediary to take responsibility for the output. There is no explicit requirement for intermediaries (or their vendors) to preserve model logs or allow regulator inspection. Without such provisions, enforcement of output accuracy or compliance with laws is hampered. For example, if an AI-generated trade signal caused a regulatory breach, SEBI (or a forensic auditor) would need access to the system’s internals to determine why.

    Industry guidance suggests that firms should make auditability a contractual requirement when procuring AI. This could involve specifications on data retention, explainability reports, and independent testing. In the SEBI context, best practice would be for intermediaries to demand from AI providers any data necessary for SEBI audits.

    In essence, two closely interconnected concerns arise. BIS notes that “limits to the explainability of certain complex AI models can result in risk management challenges, as well as lesser … supervisory insight into the build-up of systemic risks”. If AI outcomes cannot be readily audited, SEBI risks being unable to verify compliance, and without explicit audit provisions, regulators and investors may lack confidence in the system’s integrity. Additionally, absent mandated audit provisions, firms may neglect these terms in vendor agreements, even though operational prudence demands audit clauses and due diligence. SEBI should consider guidance or rules requiring regulated entities to secure audit rights over AI models, just as banks must under banking third-party rules.

    CONCLUSION

    SEBI’s insertion of Regulation 16C is a welcome and necessary move: it recognises that AI is now mission-critical in securities markets and rightly puts regulated entities on notice that AI outputs and data practices are not outside regulatory reach. Yet the regulation, as drafted, addresses only one side of a multi-party governance problem. Making intermediaries the default legal backstop, without parallel obligations on vendors or explicit auditability requirements, risks creating enforcement illusions: liability on paper that is difficult to verify or remediate in practice.

    To make the policy effective, SEBI should close the symmetry gap between users and suppliers and make AI systems practically observable. At a minimum this means clarifying the standard of liability, requiring intermediaries to retain model and data audit trails, and mandating contractual safeguards (audit rights, model-version logs, notification of material model changes, and portability requirements). If SEBI couples its clear allocation of responsibility with enforceable transparency and vendor-accountability mechanisms, it will have moved beyond a paper rule to a practical framework that preserves market integrity while enabling safe AI adoption.

  • Digital Competition Bill: Complementing or Competing with the Competition Act?


    BY WINNIE BHAT, SECOND-YEAR STUDENT AT NALSAR, HYDERABAD

    Introduction

    Data is the oil that fuels the engine of the digital world. The economic value and competitive significance of data accumulation for companies in the digital age cannot be overstated. It is in recognition of this interplay between competition and data privacy laws that the Competition Commission of India (‘CCI’) imposed a fine of Rs 213 crore on Meta, the parent company of WhatsApp, for abusing its dominant market position under Section 4 of the Competition Act, 2002 (‘CA’).

    As digital markets evolve, so too must the legal frameworks that regulate them. This article considers whether the proposed Digital Competition Bill, 2024 (‘DCB’) enhances the current competition regime or risks undermining it through regulatory overlap. In doing so, it assesses how traditional competition tools have been stretched to meet new challenges and whether a shift toward an ex-ante model is necessary and prudent.

    Reliance on Competition Act, 2002

    In the absence of a dedicated digital competition framework, Indian regulators have increasingly relied on the CA to address issues of market concentration, data-driven dominance, and unfair terms imposed by Big Tech firms. One of the clearest examples of this reliance is the CCI’s scrutiny of WhatsApp’s 2021 privacy policy. In that case, the CCI found that the policy, which mandated the sharing of users’ data with WhatsApp and its subsequent sharing with Facebook, vitiated the ‘free’, ‘optional’ and ‘well-informed’ consent of users, as WhatsApp’s dominant market position, coupled with network and tipping effects, effectively left users with no real or practical choice but to accept its unfair terms.

    This contrasts with the CCI’s previous stances in Vinod Kumar Gupta v WhatsApp and Harshita Chawla v WhatsApp & Facebook, where it declined to intervene because the data privacy violation did not impact competition. However, in a slew of progressive developments, a CCI market study has now recognised privacy as a non-price factor of competition, and the Supreme Court’s 2022 nod for the CCI to continue its investigation in the Meta-WhatsApp matter has effectively granted the CCI jurisdiction over privacy issues that have an adverse effect on competition.

    The facts of this case closely resemble those of Bundeskartellamt v Facebook Inc. (2019), wherein the German competition regulator flagged Facebook for imposing one-sided terms about tracking users’ activity in the social networking market, where consent was reduced to a mere formality. Both cases illustrate how dominant digital platforms exploit their market power to impose unfair terms on users, effectively bypassing meaningful consent. This pattern reflects a deeper structural issue: existing competition law, focused on ex-post remedies, is being stretched to address the unique challenges of digital markets. It is precisely this regulatory gap that the proposed DCB seeks to fill through its ex-ante approach.

    Abuse of dominance by Big Tech companies in the digital era occurs in subtler ways, as the price of these services is paid with users’ personal data. A unilateral modification of a data privacy policy leaves users vulnerable, as they have little bargaining power against established corporate behemoths. These companies collect vast troves of “big data” by taking advantage of their dominance in one relevant market (here, instant messaging) and deploy it in other relevant markets (social networking, personalised advertising, etc.), giving them a significant edge over competitors. This creates entry barriers, and a disproportionate share of the market accrues to a few large corporations, resulting in monopoly-like conditions.

    To deal with such issues, competition law first identifies a corporation’s dominant position in the market. Once this is established, it investigates the conduct that amounts to abuse of that position. Here, the conduct is the collection of data in a manner that invades users’ privacy without their free and informed consent. The CCI, in its ruling against Meta, held WhatsApp to be in violation of Sections 4(2)(a)(i), 4(2)(c) and 4(2)(e) of the CA, which deal, respectively, with the imposition of unfair conditions in the purchase of a service, practices resulting in denial of market access, and the use of a dominant position in one market to secure a position in another relevant market.

    The Digital Competition Bill, 2024

    The proposed Digital Competition Bill, 2024, when enacted, would mark a landmark shift in how India approaches competition regulation in digital markets. Unlike the CA, which operates on an ex-post basis, acting upon violations after analysing their effects, the DCB introduces a proactive approach that seeks to regulate the conduct of Systemically Significant Digital Enterprises (‘SSDEs’) through an ex-ante framework. SSDEs are large digital enterprises that enjoy entrenched market power and serve as critical intermediaries between businesses and users. The DCB aims to curb their ability to engage in self-preferencing, data misuse, and other exclusionary practices before harm occurs, rather than waiting for evidence of anti-competitive outcomes. While this progressive approach addresses the unique challenges posed by the dominance of digital giants, it also raises critical concerns about legislative overlap, disproportionate penalties on corporations, and potential legal uncertainty.

    A key issue with the coexistence of the DCB and the CA is the overlap in their regulatory scopes. The CA, particularly through Section 4, targets abuse of dominance through a detailed effects-based inquiry. As the CCI’s ruling against WhatsApp evidences, a compromise or breach of users’ data privacy will not be tolerated and can be treated as a means of abusing an enterprise’s dominant position. By contrast, the DCB imposes predetermined obligations on SSDEs, which are deemed to have significant market power. Section 12 of the DCB prescribes certain limitations on the use of the personal data of SSDE users, whereas Section 16 grants the CCI the power to inquire into non-compliance if a prima facie case is made out, regardless of the effects such non-compliance may have on competition.

    Concerns about dual enforcement

    This duality creates ambiguity. For instance, should a prima facie case involving data misuse by an SSDE, which unfairly elevates its market position, be assessed under the CA’s abuse of dominance provisions, or should it fall exclusively within the purview of the DCB? The risk of dual penalties further compounds these challenges. Section 28(1) of the DCB empowers the CCI to impose significant fines (not exceeding 10% of global turnover) on SSDEs for non-compliance with its obligations. However, under Section 48 of the CA, these entities are also subject to penalties for anti-competitive behaviour that may stem from the same act of data misuse.
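
    A back-of-the-envelope sketch makes the exposure concrete. All figures below are hypothetical; only the 10%-of-global-turnover cap under Section 28(1) of the DCB comes from the Bill as described above, while the CA penalty is an assumed stand-in, not a statutory formula.

    ```python
    # Hypothetical dual-penalty exposure for one act of data misuse. Only the
    # 10% global-turnover cap (DCB s. 28(1)) is from the Bill; other figures
    # are illustrative assumptions.
    global_turnover_crore_inr = 50_000               # assumed SSDE global turnover

    dcb_fine_cap = 0.10 * global_turnover_crore_inr  # up to 5,000 crore under the DCB
    assumed_ca_penalty = 1_500                       # stand-in penalty under CA s. 48

    combined_exposure = dcb_fine_cap + assumed_ca_penalty
    print(f"Worst-case combined exposure: {combined_exposure:,.0f} crore INR")
    # Two statutes, one underlying act: the proportionality concern discussed below.
    ```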

    Although the protection against double jeopardy applies only to criminal cases, its spirit is clearly implicated here: businesses could face disproportionate punishment for overlapping offences, raising concerns about fairness and proportionality. This mirrors similar concerns in the European Union, where the Digital Markets Act (‘DMA’), on which India’s DCB is modelled, operates in tandem with Articles 101 and 102 of the Treaty on the Functioning of the European Union (the traditional EU competition law provisions). However, the EU’s DMA grants the European Commission overriding powers over national competition authorities, a dynamic that brings its own challenges and is not replicated in India, since a single regulator (the CCI) would oversee the implementation of both the CA and the DCB. This vests the CCI with considerable discretion in deciding which Act takes precedence and in demarcating their spheres of regulation. The MCA report leaves potential overlaps in proceedings to be resolved by the CCI on an ad hoc basis. Therefore, statutory clarity on the application of the DCB and the CA is essential to avoid inconsistent outcomes.

    The Way Forward

    To address these challenges, India must focus on creating a harmonious regulatory framework. To this end, a Digital Markets Coordination Council could be established to harmonise enforcement actions, share data, and resolve jurisdictional disputes. Such a body could include representatives from the CCI, the Ministry of Electronics and Information Technology (MeitY), and independent technical experts to ensure holistic oversight.

    Proportional penalties are another area for reform. Lawmakers should ensure that corporations do not have to bear the burden of being punished in two different ways for the same offence. Introducing a standardised penalty framework across the DCB and CA would prevent over-penalisation and ensure fairness.

    Since the DCB has not yet been enacted, India can pre-empt these concerns of overlap and ensure that the CA and the DCB complement rather than compete with each other. The precise design of a solution is beyond the scope of this article, but by learning from the EU’s experience and adopting a coordinated, balanced approach, India can create a regulatory framework that promotes innovation, safeguards competition, and protects consumers’ rights and interests in the digital age.