The Corporate & Commercial Law Society Blog, HNLU

Tag: technology

  • Algorithmic Enforcement and Anti-Competitive Effects: CCI vs. Swiggy and Zomato

    BY VASHMATH POTLURI, THIRD-YEAR STUDENT AT NALSAR, HYDERABAD

    INTRODUCTION

    The food delivery market in India has been one of the most dynamic and volatile markets, witnessing the quick exit of players like Uber Eats and Foodpanda, among others, while being dominated by Zomato and Swiggy, which hold market shares of 58% and 42%, respectively. While many factors explain this dominance, the recent allegations of Price Parity Clauses (“PPCs”) and exclusive agreements levelled by the National Restaurant Association of India (“NRAI”) against both platforms shed some light on the reasons for such market share. The findings of the Director General (“DG”), as reported by Reuters, indicate that the Competition Commission of India (“CCI”) is proceeding against these platforms under Section 3(4)(c) of the Competition Act, 2002 (“Act”), based on the presumption that Swiggy and Zomato operate in a vertical framework as intermediaries distinct from their restaurant partners. This article challenges that presumption and argues that Swiggy and Zomato’s ownership of cloud kitchens transforms their relationship with restaurants into one of direct competition. As a result, it pushes for a reclassification of this case under Section 3(3)(a) and (b), enabling a shift from a ‘rule of reason’ approach to a per se standard.

    The article advances this argument in a two-fold manner. First, it analyzes the anti-competitive effects of PPCs and exclusivity agreements, particularly in conjunction with Swiggy and Zomato’s cloud kitchens. Second, it examines the role of dynamic algorithms in furthering these practices, proposing the introduction of an Algorithmic Facilitation Standard (“AFS”) in the Act to ensure regulatory scrutiny and transparency in the market, in line with the approach of the EU.

    HORIZONTAL PRICE FIXING AND MARKET ALLOCATION

    The allegations by the NRAI that Swiggy and Zomato operate their cloud kitchens and enter into arrangements such as PPCs and exclusivity agreements throw light on the dominance of these platforms through anti-competitive practices. These practices demonstrate that these platforms are not merely intermediaries with restaurants as downstream partners, but competitors operating simultaneously in both the food preparation and delivery markets. This dual role works to the detriment of independent restaurants. 

    In the MakeMyTrip (“MMT-GO”) case, the CCI assessed the anti-competitive effects of wide PPCs and exclusivity partnerships in a vertical framework between MakeMyTrip, Goibibo, and OYO and their hotel partners. The CCI found that these agreements restricted hotels from offering lower prices or better terms on competing platforms, creating entry barriers and limiting consumer choice. As a result, the CCI held that these agreements resulted in an Appreciable Adverse Effect on Competition (“AAEC”) — a standard under Section 19(3) of the Act, which examines factors such as foreclosure of competition, barriers to entry, and harm to consumer choice. Relying on these findings, this article argues that the anti-competitive practices of Swiggy and Zomato produce identical effects, such as inflated prices and foreclosure of competition, but in a horizontal framework rather than a vertical one.

    Applying the MMT-GO findings on wide PPCs, the clauses entered into by Swiggy and Zomato are wide because they suppress competition in the market by mandating that restaurants maintain uniform prices across all channels, including their direct platforms and competing delivery services. This eliminates price differentiation and forces restaurants to inflate prices, depriving consumers of competitive pricing or discounts. These clauses also ensure that Swiggy and Zomato’s cloud kitchens are insulated from price competition, as restaurants cannot undercut them even when operating more cost-effectively. Exclusivity agreements, in turn, further suppress competition by restricting restaurants from listing on competing platforms or offering direct delivery services, creating a “lock-in” effect. This limits consumer access to popular restaurants and forecloses rival platforms from competing effectively.

    These arrangements unfairly establish the dominance of Swiggy and Zomato’s cloud kitchens by allowing them to leverage vast data generated through their platforms. This data provides critical insights into consumer preferences, including popular cuisines, peak ordering times, delivery locations, and pricing trends. Using this information, Swiggy and Zomato can strategically design their cloud kitchen offerings to align with market demand precisely, bypassing the trial-and-error process faced by independent restaurants. They can quickly identify underserved cuisines or delivery zones and establish cloud kitchens to fill these gaps with minimal risk and cost. This data-driven approach grants their cloud kitchens a significant competitive edge over independent restaurants, which lack access to such comprehensive data and must rely on slower, costlier market research methods.

    The combined effect of PPCs, exclusivity agreements, and cloud kitchens at a horizontal level is the creation of barriers to entry and the foreclosure of competition, causing an AAEC under Section 19(3)(a) to (c). Hence, this article argues that the CCI must re-examine this case under Section 3(3)(a) and (b) through a ‘per se’ approach. Taking inspiration from the EU’s Vertical Block Exemption Regulation (“VBER”), which removed wide PPCs from the regulatory exemption, the CCI could impose cease-and-desist orders and monetary penalties, ensuring a competitive marketplace.

    ALGORITHMIC FACILITATION STANDARD

    Swiggy and Zomato’s algorithms play a crucial role in enforcing PPCs and exclusivity agreements, amplifying their anti-competitive effects. These platforms use algorithms to monitor pricing across various channels, including restaurants’ direct platforms and competing delivery services, ensuring strict compliance with PPCs. By scanning for pricing discrepancies, the algorithms flag instances where restaurants offer lower prices on alternative channels. Non-compliant restaurants face automated penalties, such as reduced visibility in search results or exclusion from promotional campaigns, discouraging price competition. Similarly, these algorithms enforce exclusivity agreements by tracking restaurants’ activities on competing platforms. Exclusive partners receive preferential treatment, such as enhanced visibility, while restaurants breaching exclusivity face reduced exposure, limiting their ability to attract orders.

    Operating as a “black box,” these algorithms lack transparency, leaving restaurants unaware of the reasons for penalties or visibility changes. This creates a unilateral power dynamic that disproportionately favours Swiggy and Zomato, making it difficult for restaurants to challenge or adapt to platform policies. In this context, the article proposes the AFS to identify the role of such algorithms and bring them under regulatory scrutiny. Under the AFS, the CCI would be required to follow a two-step inquiry:

    MANDATORY ALGORITHMIC DISCLOSURES: 

    The first step in the proposed AFS is to mandate disclosures by Swiggy and Zomato regarding their algorithmic decision-making. These platforms must provide information about the design, operation, and structure of their algorithms, specifically in relation to penalizing or incentivizing restaurants. Such disclosures should be made to the DG under Section 36(4)(b) of the Act during the investigation stage. This requirement mirrors the EU Platform-to-Business Regulation (Regulation (EU) 2019/1150), which mandates transparency in ranking criteria, ensuring that platforms do not manipulate search results based on monetary compensation or preferential treatment.

    EFFECTS-BASED OUTCOME ANALYSIS:

    The second step shifts the scrutiny from intent to effects, applying an effects-based outcome analysis to assess whether these algorithms control prices, foreclose competition, or limit consumer choice by restricting visibility or promotions. If these practices result in an AAEC, the burden of disproving their anti-competitive impact should shift onto Swiggy and Zomato, allowing the CCI to order a rollback of such algorithms, if necessary. This aligns with the EU Court of Justice’s ruling in the Google Shopping case, which found algorithmic self-preferencing anti-competitive and rejected short-term efficiency arguments as justifications for long-term market harm. Likewise, under Section 19(3)(d) to (f) of the Act, any efficiency claims by Swiggy and Zomato should be dismissed if they come at the expense of competition.

    WAY FORWARD

    This article proposes that the AFS could be incorporated into the Act in two ways. First, under the ‘hub-and-spoke’ model, introduced through the Competition (Amendment) Act, 2023, wherein a central entity (the hub) can facilitate anti-competitive coordination among independent entities (the spokes), even if they do not explicitly collude with each other. In this context, Swiggy and Zomato function as hubs, using algorithms to impose price parity and exclusivity conditions on restaurants (the spokes), effectively orchestrating market behavior without direct collusion between restaurants. Second, the liability of Swiggy and Zomato could be invoked under Section 2(b), as part of tacit collusion through algorithmic enforcement. Since intent is irrelevant under the ‘per se’ approach, the AFS would impute intent constructively, aligning with the Competition Law Review Committee’s 2019 recommendation of a “guilty until proven otherwise” standard in cases involving algorithmic anti-competitive practices.

    CONCLUSION

    While the case is still pending before the CCI, this article has established that Swiggy and Zomato’s anti-competitive practices produce effects similar to horizontal price fixing and market allocation under Section 3(3)(a) and (b). Reclassification would enable a shift from the ‘rule of reason’, under which the entire burden of proving anti-competitive effects rests on the complainant, to the ‘per se’ standard. Where these practices are furthered by opaque algorithms, it is otherwise difficult to hold Swiggy and Zomato responsible for their actions. Thus, under the AFS, once the presence of algorithms is established and their prima facie effects are assessed after due disclosure to the CCI, a heavy burden of disproving AAEC would fall on Swiggy and Zomato. This reclassification would represent a significant jurisprudential shift, setting a precedent for addressing algorithm-driven anti-competitive practices and establishing a framework for future action against quick-commerce platforms.

  • Examining the Flaws in SEBI’s Proposed AI & ML Regulations

    BY SACHIN DUBEY AND AJITESH SRIVASTAVA, THIRD-YEAR STUDENTS AT NLU, ODISHA AND LLOYD LAW COLLEGE

    INTRODUCTION

    Artificial Intelligence (‘AI’) has become an integral part of our daily lives, influencing everything from smart home technology to cutting-edge medical diagnostics. However, its most profound influence is perhaps in transforming the landscape of the securities market. AI has advanced the efficiency of investor services and compliance operations. This integration empowers stakeholders to make well-informed decisions, playing a pivotal role in market analysis, stock selection, investment planning, and portfolio management for their chosen securities.

    However, despite the advantages, AI poses risks such as algorithmic bias from biased data, lack of transparency in models, cybersecurity threats, and ethical concerns like job displacement and misuse, highlighting the need for strong regulatory oversight. Therefore, the Securities and Exchange Board of India (‘SEBI’), vide a consultation paper dated 13th November 2024, proposed amendments holding regulated entities (‘REs’) accountable for the use of AI and machine learning (‘ML’) tools.

    These amendments enable SEBI to take action in the event of any shortcomings in the use of AI/ML systems. SEBI emphasises that these entities are required to safeguard data privacy, be accountable for actions derived from AI outputs, and fulfil their fiduciary responsibility towards investor data, while ensuring compliance with applicable laws.

    In this article, the authors emphasise the necessity of the proposed amendments while simultaneously highlighting their potential drawbacks.

    NEED FOR THE PROPOSED AMENDMENTS

    The need for amendments holding REs accountable for AI/ML usage has arisen due to the various risks associated with such usage.

    AI relies heavily on the customer inputs and datasets fed into it for arriving at its output. The problem is that humans have found it very difficult to understand or explain how AI arrives at its output. This is widely referred to as the “black box problem”. In designing machine learning algorithms, programmers set the goals the algorithm needs to achieve but do not prescribe the exact steps it should follow to solve the problem. Instead, the algorithm creates its own model by learning dynamically from the given data, analysing inputs, and integrating new information to address the problem. This opacity surrounding the explainability of AI outputs raises concerns about accountability for AI-generated outcomes within the legal field.

    Further, if just one element in a dataset changes, it can cause the AI to learn and process information differently, potentially leading to outcomes that deviate from the intended use case. Data may contain inherent biases that reinforce flawed decision-making or include inaccuracies that lead the algorithm to underestimate the probability of rare yet significant events. This may jeopardise the interests of customers and entrench discriminatory biases.

    Additionally, relying on large datasets for AI functionality poses considerable risks to privacy and confidentiality. AI models may sometimes be trained on datasets containing customers’ private information or insider data. In such situations, it becomes crucial to establish accountability for breaches of privacy and confidentiality. 

    SHORTCOMINGS

    SEBI’s proposal to amend regulations and assign responsibility for the use of AI and machine learning by REs is well-intentioned. However, it could create challenges for both regulated entities and industry players, potentially slowing down the adoption of AI and stifling innovation.

    a. Firstly, SEBI’s proposal to assign responsibility for AI usage adopts a uniform, one-size-fits-all regulatory approach, which may ultimately hinder technological innovation. Effective AI regulation requires greater flexibility, favouring a risk-based framework. This approach classifies AI systems based on their risk levels and applies tailored regulatory measures according to the associated risks. A notable example is the European Union’s AI Act, which adopts a proportionate, risk-based approach to AI regulation. This framework introduces a graduated system of requirements and obligations based on the level of risk an AI system poses to health, safety, and fundamental rights. The Act classifies risks into four distinct categories: unacceptable risk, high risk, limited risk, and minimal risk. Under this classification, AI practices falling in the unacceptable-risk category are completely prohibited, while others are allowed to continue with transparency obligations imposed upon them.

    b. Secondly, while SEBI’s regulatory oversight of AI usage by REs is crucial for protecting investor interests, it is equally important to establish an internal management body to oversee the adoption and implementation of AI within these entities. SEBI could draw insights from the International Organization of Securities Commissions’ (‘IOSCO’) final report on AI and machine learning in market intermediaries and asset management. The report recommends that regulated entities designate senior management to oversee AI/ML development, deployment, monitoring, and controls. It also advocates for a documented governance framework with clear accountability, and for assigning a qualified senior individual or team to approve initial deployments and major updates, potentially aligning this role with existing technology or data oversight.

    c. Thirdly, SEBI has entirely placed the responsibility for AI and machine learning usage on REs, neglecting to define the accountability of external stakeholders or third-party providers. REs significantly rely on third parties for AI/ML technologies to ensure smooth operations. Hence, it is vital to clearly outline the responsibilities of these third parties within the AI value chain. 

    d. Fourthly, the Asia Securities Industry & Financial Markets Association (‘ASIFMA’) has raised the concern that financial institutions should not be held responsible for client decisions based on AI-generated outputs. It contends that it would be unjustified to hold institutions liable when an AI tool provides precise information but the client subsequently makes an independent decision. This viewpoint runs counter to SEBI’s proposed amendments, which seemingly endorse broader institutional liability.

    e. Lastly, SEBI’s proposed amendments and existing regulations remain silent on the standards or requirements for the data sets (input data) utilized by AI/ML systems to carry out their functions. While the amendments imply that REs must ensure AI models are trained using data sets that either do not require consent (e.g., publicly available data) or have obtained appropriate consent, particularly under the Digital Personal Data Protection Act, 2023 (DPDPA), SEBI could have more explicitly defined the standards for high-quality data sets suitable for AI/ML functionality, which is particularly crucial when the rules under the DPDPA have not yet seen the light of day.

    CONCLUSION

    While it is commendable that SEBI, recognizing the growing use of AI/ML tools in the financial sector, has proposed amendments to hold REs accountable for their usage, it should have given due consideration to the factors mentioned above. It is vital that any policy introduced is crafted carefully, in a way that does not discourage innovation and growth in the emerging fields of AI and ML technology.

  • Aligning RBI Directives with DPDP Act in the Banking Sector

    BY VISHWAROOP CHATTERJEE AND NACHIKETA NARAIN, SECOND-YEAR STUDENTS AT RGNUL, PATIALA.

    Introduction 

    Failure To Comply with Data Protection Protocols: The Kotak Mahindra Bank Incident

    Data Privacy in the Banking Sector

    Analysis and Suggestions to the DPDP Act

    Conclusion