
AI and Law: Liabilities in Artificial Intelligence with Special Reference to AI Chatbots and Virtual Assistance

Paper Details 

Paper Code: RP-VBCL-18-2025

Category: Research Paper

Date of Publication: April 20, 2025

Citation: Ms. A Runcie Mathew & Mr. Arunbaby Stephen, “AI and Law: Liabilities in Artificial Intelligence with Special Reference to AI Chatbots and Virtual Assistance", 2, AIJVBCL, 260, 260-272 (2025).

Author Details: Dr. Ashwini P, Assistant Professor, Shri Dharmastala Manjunatheshwara Law College, Centre for Post Graduate Studies and Research in Law, Mangaluru, Karnataka.

Mr. Mahan, Student, Third Year B.A.LL.B., Shri Dharmastala Manjunatheshwara Law College, Centre for Post Graduate Studies and Research in Law, Mangaluru, Karnataka.




ABSTRACT

Artificial Intelligence (AI) has become an integral part of our daily lives, making tasks easier and more efficient. From predictive text that improves our typing to virtual assistants like Siri and Alexa that help us stay organized and access information, AI seamlessly integrates into our routines. In healthcare, AI-powered diagnostic tools assist in identifying diseases, revolutionizing medical assessments. Additionally, AI chatbots have transformed customer service and legal consultation, offering instant responses and structured communication. As AI continues to evolve, it has become a trusted companion in both personal and professional spaces, reshaping the way we live and work. However, as we increasingly rely on these technologies, critical legal and ethical concerns emerge. Who is responsible if an AI chatbot provides misleading advice? What happens when an AI system misinterprets data? How do we ensure data privacy when AI-driven virtual assistants handle sensitive information? This research paper explores the legal liabilities surrounding AI chatbots and virtual assistants, focusing on issues of misinformation, privacy breaches, and accountability. It delves into the challenges of assigning responsibility among developers, service providers, and users while analyzing how existing legal frameworks adapt to AI's growing influence. As AI continues to shape industries, governments worldwide, including India, are working on regulations to address these concerns. This study assesses the current legal position and offers recommendations to ensure that AI remains a tool for progress while safeguarding against its legal risks. By balancing innovation with accountability, we can create a future where AI enhances our lives without compromising our rights.

Keywords: Artificial Intelligence (AI), IPR, AI Chatbots, Virtual Assistants, Electronic Personhood, Liabilities.


INTRODUCTION

Not long ago, seeking legal advice meant consulting a lawyer in person, sifting through dense legal texts, or navigating complex judicial systems. Today, a simple chat with an AI-powered assistant can provide instant legal insights at the click of a button. AI chatbots and virtual assistants have transformed the way individuals and businesses approach legal questions, offering efficiency, accessibility, and round-the-clock assistance.[1]

For centuries, human creativity has been at the heart of innovation, driving progress in science, technology, and the arts. Every invention, every breakthrough, has been attributed to a human mind, until now. With artificial intelligence (AI) advancing at an unprecedented pace, the very definition of creativity and intellectual ownership is being challenged. AI is no longer just a tool for assisting humans; in some cases, it is creating entirely new ideas, designs, and solutions on its own. This shift has ignited a global debate about whether AI can be recognized as an inventor and whether existing legal frameworks are equipped to handle this new reality.

The legal controversy surrounding AI and intellectual property rights came into sharp focus with the case of DABUS[2] ("Device for the Autonomous Bootstrapping of Unified Sentience"), an AI system developed by Dr. Stephen Thaler. Unlike traditional software programmed to follow predefined instructions, DABUS independently generated innovative ideas, leading its creator to file patent applications in multiple jurisdictions, including the United States, the United Kingdom, Australia, and South Africa, arguing that AI should be granted inventorship status.[3]

Beyond the legal battles, the implications of AI-generated intellectual property extend far beyond patents. The rise of autonomous creativity challenges traditional notions of ownership, accountability, and reward. Unlike human inventors and artists, AI systems do not possess intent, consciousness, or personal motivations, yet they are capable of producing original ideas, artworks, and even problem-solving solutions that rival human ingenuity. This unprecedented shift challenges the very foundation of intellectual property laws, which have historically been designed to recognize and reward human effort. If AI continues to break new ground in innovation, legal frameworks must evolve not only to define ownership but also to address the ethical and economic consequences of AI-generated works. The future of law must strike a delicate balance between fostering technological advancement and ensuring that intellectual property rights remain fair, transparent, and adaptable in an era of machine-driven creativity.


DEFINITION

John McCarthy, an emeritus Stanford professor, first introduced the term "Artificial Intelligence" (AI) in 1955,[4] marking the beginning of a technological revolution aimed at creating machines that can replicate human intelligence. He defined it as "the science and engineering of making intelligent machines". Despite its growing impact, there is no universally accepted legal definition of AI. Over time, AI has evolved from basic automation to complex systems capable of making independent decisions, analyzing vast datasets, and even engaging in creative processes.

According to the Cambridge Dictionary, AI is "the use or study of computer systems or machines that have some of the qualities that the human brain has, such as the ability to interpret and produce language in a way that seems human, recognize or create images, solve problems, and learn from data supplied to them".[5]

The Oxford English Dictionary defines AI as "the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages."[6]

The field of AI is vast. It is broadly categorized into four different approaches, each reflecting a unique aspect of intelligence[7]:

•  Thinking Humanly: "The exciting new effort to make computers think… machines with minds, in the full and literal sense"[8]

•  Thinking Rationally: "The study of the computations that make it possible to perceive, reason, and act"[9]

•  Acting Humanly: "The study of how to make computers do things at which, at the moment, people are better"[10]

•  Acting Rationally: "AI is concerned with intelligent behaviour in artifacts"[11]

Building on these definitions, AI can be redefined in a modern context as a dynamic and evolving discipline that enables machines to replicate, augment, and even surpass human cognitive abilities. It is no longer limited to predefined instructions but has grown into a self-improving system capable of learning, adapting, and innovating. AI today is not just a tool; it is an entity that interacts, analyzes, and predicts, reshaping industries and redefining human-machine collaboration.


INTELLECTUAL PROPERTY RIGHTS AND ITS TYPES

Artificial Intelligence (AI) is transforming the landscape of Intellectual Property Rights (IPR). Existing IP laws, such as copyright, patents, and trademarks, were designed with human creators in mind, making it essential to adapt legal frameworks to address AI's role in innovation. Balancing protection for AI-driven creations while fostering technological progress is the need of the hour.

i. Copyright – Copyright protects the creative expressions of authors, artists, and producers, covering literary works, music, films, software, and even architectural designs.[12] It grants creators the right to control how their work is copied, shared, or modified, ensuring they receive credit and compensation for their efforts. In India, the Copyright Act, 1957[13] safeguards originality and encourages artistic and intellectual innovation by protecting such works.

ii. Patent – A patent is a powerful tool that grants inventors exclusive rights to their inventions, whether they are ground-breaking products, processes, or technological improvements. This legal protection prevents others from using, selling, or replicating the patented invention without permission. To obtain a patent, inventors must disclose the technical aspects of their innovation. The Patents Act, 1970[14] provides this protection, fostering a culture of scientific and technological progress.

iii. Industrial Design – Industrial design rights protect the aesthetic aspects of a product: its shape, pattern, configuration, or colour arrangements that make it visually appealing. Whether it is a sleek car model, stylish furniture, or an innovative gadget, design protection ensures that the creator's unique visual identity remains safeguarded. The Designs Act, 2000[15] aligns India's design regulations with international standards, promoting creative industries.

iv. Trademark – A trademark is a brand's unique identity, distinguishing a company's goods or services from those of others. It could be a logo, slogan, symbol, or even a distinctive colour or sound associated with a business. Think of the golden arches of McDonald's or the Nike swoosh: these trademarks not only build brand recognition but also foster consumer trust. By securing legal rights under trademark laws, businesses can prevent unauthorized use of their branding, ensuring authenticity and market value. The Trade Marks Act, 1999[16] is the primary legislation governing trademarks in India.

 

LEGAL FRAMEWORKS GOVERNING AI LIABILITY

AI chatbots and virtual assistants are changing the way we interact with technology. However, when something goes wrong, whether through misinformation, bias, or a security breach, accountability is widely debated. Existing legal frameworks struggle to address AI liability, as traditional laws were designed for human decision-makers, not autonomous systems.

The EU AI Act classifies AI systems based on risk levels, imposing stricter requirements on high-risk AI applications, including legal and financial AI chatbots. The AI Act (Regulation (EU) 2024/1689)[17] also marks a historic step in regulating artificial intelligence, making Europe the first region to introduce a comprehensive legal framework for AI governance. Unlike previous fragmented approaches, the AI Act directly addresses AI-related risks while promoting trust, innovation, and ethical AI development.

The United States regulates AI through a sector-specific approach, with different agencies overseeing AI applications within their domains. The Federal Trade Commission (FTC) ensures AI compliance with consumer protection laws, emphasizing fairness, transparency, and non-discrimination in AI-driven decision-making. The Food and Drug Administration (FDA) regulates AI in healthcare, while the National Highway Traffic Safety Administration (NHTSA) monitors AI in self-driving cars. Similarly, the Securities and Exchange Commission (SEC) oversees AI in financial markets to prevent fraud.[18]

India is still developing its AI legal policy, with guidelines focusing on AI ethics, privacy, and liability. The NITI Aayog report[19] delves into the application of responsible AI principles, specifically focusing on Facial Recognition Technology (FRT) in India. It highlights the technology's applications in security and authentication while addressing concerns such as privacy, data protection, and potential biases. The report examines global AI governance models and emphasizes the need for a structured regulatory framework in India to ensure responsible AI deployment. It serves as a guide for balancing innovation with ethical considerations.


LIABILITY AND ACCOUNTABILITY IN AI CHATBOTS AND VIRTUAL ASSISTANTS

AI assistants like ChatGPT, Google Bard, and Apple's Siri offer instant responses, and the increasing reliance on such AI-driven chatbots and virtual assistants raises critical questions about accountability. While chatbots enhance user experience and automate tasks, their errors can have legal and ethical consequences. A notable case involving Air Canada[20] demonstrated corporate liability when its chatbot misinformed a customer. Canada's largest airline faced legal consequences after its AI-powered assistant gave a customer incorrect advice regarding plane tickets, ultimately leading to a dispute. The Civil Resolution Tribunal (CRT) in British Columbia ruled that Air Canada was responsible and ordered the airline to compensate the affected customer. While this was a small claims case, it underscores a larger issue: the accountability of companies for the digital tools they use to engage with customers. The case serves as a reminder that businesses must ensure their AI-driven systems provide accurate and reliable information to avoid legal and reputational risks.[21]

A New York lawyer used ChatGPT to generate case citations, only to discover that the chatbot had fabricated them, leading to legal penalties.[22] Although chatbots simulate human conversation using natural language processing (NLP), they lack independent decision-making abilities. Responsibility for chatbot actions lies with developers, who must validate algorithms to ensure accuracy, fairness, and compliance with laws. Secure development practices, such as following the software development lifecycle (SDLC), are crucial in mitigating risks like security vulnerabilities and misinformation.[23]

AI-driven contract generation is on the rise, but responsibility for errors or ambiguous clauses in AI-drafted contracts is now a legal challenge. Courts in the UK have already encountered cases where AI-generated agreements have led to disputes. In India, where legal documentation plays a crucial role in business and governance, similar concerns are emerging. With the rise of smart contracts powered by AI and blockchain, legal agreements can now be automated, reducing human intervention. However, problems arise when these contracts self-execute incorrectly due to programming flaws, leading to unintended legal and financial consequences.

AI chatbots process vast amounts of personal data, making them vulnerable to cyberattacks and data breaches. For instance, in March 2023, a ChatGPT bug led to the exposure of user chat histories, raising serious concerns about AI security.[24] The responsibility for an AI assistant accidentally exposing sensitive user information typically falls on the company or organization that developed or deployed the AI system, as they are accountable for ensuring data security and compliance with privacy laws. In India, where data protection laws are evolving, such incidents highlight the urgent need for stronger regulations and accountability measures.

AI systems are expected to comply with global privacy laws like the General Data Protection Regulation (GDPR)[25] in the EU and the California Consumer Privacy Act (CCPA) in the U.S. However, enforcing accountability becomes complex when AI models are trained on vast public datasets. With India working towards implementing its own Digital Personal Data Protection Act of 2023[26], ensuring AI-driven platforms are secure and responsible remains a crucial challenge.

AI chatbots can be misused for cybercrimes, such as generating deepfake videos, crafting phishing emails, and automating scams. These threats are particularly concerning as AI technology becomes more advanced and accessible. For example, AI-generated deepfake videos have already been used in fraud cases, tricking companies into making million-dollar transactions.[27] In India, where digital transactions are rapidly growing, such scams pose a significant risk to businesses and individuals. To address these challenges, global regulations are evolving. The UK Online Safety Act 2023 seeks to hold AI platforms accountable for misinformation and harmful content. In India, while laws like the Information Technology Act, 2000 and proposed amendments aim to regulate digital crimes, there is a growing need for AI-specific legal frameworks to combat emerging threats.

 

CASE LAWS

In Anuradha Bhasin v. Union of India[28], the Supreme Court of India dealt with concerns regarding internet shutdowns in Jammu and Kashmir. The court observed that any use of AI in content moderation could result in different types of biases and discrimination, and that there must be proper systems in the respective domains to prevent the same.

In Gaurav Bhatia v. Union of India[29], the court held that inventions made through AI can be patented if they meet the requirements of novelty, industrial application, and non-obviousness under the conditions laid down in the Patents Act.[30]

In Infopaq International A/S v Danske Dagblades Forening[31], the Court of Justice of the European Union referred to the "author's own intellectual creation" standard while dealing with the originality of works eligible for copyright. Even though AI has gained a lot of importance, AI-generated output is not fully accepted as an original work since there is no involvement of human beings.

Similarly, in Acohs Pty Ltd v Ucorp Pty Ltd[32], the same issue came before the Australian court, where the question of granting copyright to AI-generated content arose. The court held that copyright cannot be granted for work produced by an instrument unless there is some kind of human involvement. Such case laws emphasize the significance of works being created by humans; any recognition concerning IP can be given only when there is human invention. In addition to such findings, it has also been argued from a different perspective that works generated by AI should not be owned by any person or organization and ought to be considered free.[33]

In Eastern Book Company v D B Modak[34], the Indian courts adopted the "modicum of creativity" test, establishing that a certain minimal level of creativity is required for copyright protection. Applying this doctrine to artificial intelligence (AI), it can be argued that AI systems are capable of achieving this minimum threshold of creativity. Consequently, works generated by AI may meet the originality test necessary for copyright protection.

However, in Rupendra Kashyap v Jiwan Publishing House Pvt Ltd[35], the Delhi High Court took a more traditional stance regarding copyright claims. The Central Board of Secondary Education (CBSE) asserted copyright over question papers. The Court ruled that CBSE, being an artificial entity, could not claim authorship without demonstrating individual human involvement in the creation of the content. This decision highlights the principle that, under Indian copyright law, authorship can only be attributed to a natural person.

In MySpace Inc v. Super Cassettes Industries Ltd[36], there was a discussion on the manner in which AI tools were utilized to enhance the existing system and provide a proper remedy based on the nature of the case. Here, an AI-generated algorithm was used to identify copyrighted content and remove it from social media platforms. The court held that such use of AI tools cannot be considered a violation or infringement under the Act, as there is no reproduction of the copyrighted material, and hence it falls within what is acceptable under copyright law.

In South Asia FM Limited v. Union of India[37], it was held that a song created using an AI system cannot be considered for copyright protection, as there was no human involvement and no creativity, which is one of the essential elements for granting protection. On similar grounds, it was held in another case that inventions created using computer programs or software cannot be considered inventions under IP law, as they are not something invented by a person. Such cases show the need for particular rules and regulations governing the creation process and for clarity in the existing legal frameworks. In general, AI-generated inventions are not accepted as inventions under IP law, even though people are highly influenced by the features available in such systems and the simpler mechanisms they offer to make tasks easier.

In M/S Kibow Biotech v. M/S Registrar of Trade Marks[38], it was held that AI cannot be considered a proprietor or owner of a trademark under the Trade Marks Act, 1999 in India. Under the Act, only a person is eligible to register a trademark, and AI does not qualify to do so.

Similarly, in Dr. Alaka Sharma v. Union of India[39], the court discussed whether a trademark could be granted to an AI-generated painting. It was held that such a painting does not fulfil the eligibility requirements for a trademark, as it lacks distinctiveness and a unique feature of its own. The initial steps towards accommodating trademarks generated using AI would be to introduce legal frameworks that can regulate such changes, develop separate systems for addressing issues concerning ownership, and enhance the examination of trademarks.


RECOMMENDATIONS

To safeguard human interests while fostering AI innovation, India must enact a "Responsible AI and Legal Accountability Act". This legislation should encompass:

1) Definition of AI-Generated Works: The Act must define AI-generated content within the framework of the Indian Copyright Act, 1957 and the Patents Act, 1970, ensuring that ownership of AI-driven innovations is clearly attributed to programmers, users, or entities deploying AI technologies.

2) Liability and Accountability Mechanism: A clear liability framework should be established, classifying responsibility based on the level of human control over AI. Developers, deployers, and users should be held accountable for harm caused by AI, preventing regulatory loopholes.

3) Sector-Specific AI Regulation: Given AI's growing role in critical areas like healthcare, law enforcement, and finance, sectoral AI regulations should be introduced to ensure compliance with existing laws, ethical AI practices, and consumer protection standards.

4) AI Licensing and Compliance Framework: Inspired by India's data protection laws, AI development should adhere to safety guidelines. Developers must ensure AI models do not propagate biases, misinformation, or discriminatory practices.

5) AI as a Legal Entity (Conditional Status): While AI cannot be granted full legal personhood, India can introduce a conditional legal identity for AI in specific cases, allowing AI-driven systems to enter contracts or own copyrights under a designated human supervisor.

6) Jurisdiction and Cause of Action: If an AI-related dispute arises in India, Indian courts must have the jurisdiction to hear the case, regardless of whether the AI developer or its parent company is based in India. If an AI system causes harm or operates unlawfully in India, the cause of action should be deemed to have arisen within India, making the legal proceedings valid under Indian law.

7) Accountability of Foreign AI Developers: If an AI system is developed by a company that does not have a physical presence in India, strict penalties must be imposed. This could include banning the AI system from operating within Indian territory or ordering its destruction, ensuring that foreign companies cannot evade legal consequences for AI-related harm.

8) Regulation of AI Chatbots and Virtual Assistants: Given concerns around misinformation, privacy violations, and unethical AI behaviour, chatbots that pose a threat to national security, public safety, or user privacy should be banned from operating in India.


CONCLUSION

As AI chatbots and virtual assistants become an integral part of our daily lives, the question of liability and accountability cannot be ignored. These intelligent systems are no longer simple tools that merely follow pre-programmed instructions. They now learn, adapt, and even make autonomous decisions, sometimes with unintended consequences. From spreading misinformation to making financial errors or even causing harm through biased decision-making, AI systems pose legal and ethical challenges that existing laws struggle to address.

The current legal framework primarily holds developers or deploying organizations accountable for AI-related harm. However, this approach fails to account for AI's evolving nature, where decisions are often made beyond direct human control. A more nuanced liability model is needed, one that balances innovation with accountability. Going forward, India should adopt a hybrid approach to AI liability, as outlined in the recommendations above.

Ultimately, AI should be a tool for empowerment, not a source of unchecked risk. By establishing clear legal, ethical, and accountability measures, India can ensure that AI-driven innovations enhance human progress while minimizing harm. The future of AI lies not in replacing human responsibility but in redefining it for a rapidly evolving technological landscape.

 

 

*Assistant Professor, Shri Dharmastala Manjunatheshwara Law College, Centre for Post Graduate Studies and Research in Law, Mangaluru, Karnataka.

**Student, Third Year B.A.LL.B., Shri Dharmastala Manjunatheshwara Law College, Centre for Post Graduate Studies and Research in Law, Mangaluru, Karnataka.

[1]World Intellectual Property Organization (WIPO), ‘Artificial Intelligence and Intellectual Property Policy’ (2021) https://www.wipo.int/about-ip/en/frontier_technologies/, accessed 4 February 2025.

[2]Thaler v Comptroller-General of Patents, Designs and Trade Marks [2021] EWCA Civ 1374.

[3]Rahul Kanna and Pallavi Singh, 'Indian Journal of Artificial Intelligence and Law' (2022) 2(2) IJAIL https://www.isail.in/_files/ugd/f0525d_be0523a94c0d4a48aed2933f1ec96b9e.pdf, accessed 4 February 2025.

[4]Christopher Manning, ‘AI Definitions’ (Stanford University, September 2020) https://hai.stanford.edu/sites/default/files/2020-09/AI-Definitions-HAI.pdf,  accessed 4 February 2025.

[5] Cambridge Dictionary, ‘Artificial Intelligence’ (Cambridge University Press) https://dictionary.cambridge.org/dictionary/english/artificial-intelligence, accessed 4 February 2025.

[6]Oxford Reference, ‘Artificial Intelligence’ (Oxford University Press) https://www.oxfordreference.com/display/10.1093/oi/authority.20110803095426960, accessed 4 February 2025.

[7]Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach (4th edn, Pearson 2020) https://people.engr.tamu.edu/guni/csce421/files/AI_Russell_Norvig.pdf, accessed 4 February 2025.

[8] John Haugeland, Artificial Intelligence: The Very Idea (MIT Press 1985).

[9]Patrick Henry Winston, Artificial Intelligence (3rd edn, Addison-Wesley 1992).

[10]Elaine Rich and Kevin Knight, Artificial Intelligence (2nd edn, McGraw-Hill 1991).

[11]Nils J Nilsson, Artificial Intelligence: A New Synthesis (Morgan Kaufmann 1998).

[12] US Copyright Office, ‘Copyright for Writers’ (Copyright.gov) https://www.copyright.gov/engage/writers/ accessed 6 February 2025.

[13]Government of India, ‘The Copyright Rules, 1957’ (Copyright Office, India) https://www.copyright.gov.in/Documents/Copyrightrules1957.pdf accessed 6 February 2025.

[14]Government of India, ‘The Patents Act, 1970’ (as amended on 11 March 2015) https://ipindia.gov.in/writereaddata/portal/ipoact/1_31_1_patent-act-1970-11march2015.pdf accessed 6 February 2025.

[15]Government of India, ‘The Designs Act, 2000’ https://ipindia.gov.in/designs-act-2000.htm accessed 6 February 2025.

[16]Government of India, ‘The Trade Marks Act, 1999’ https://ipindia.gov.in/writereaddata/Portal/IPOAct/1_43_1_trade-marks-act.pdf, accessed 6 February 2025.

[17]Regulation (EU) 2024/1689 of the European Parliament and of the Council of 12 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts [2024] OJ L 310/1 https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32024R1689, accessed 4 February 2025.

[18]Federal Trade Commission, ‘Comment to the US Copyright Office on Artificial Intelligence and Copyright’ (30 October 2023) https://www.ftc.gov/system/files/ftc_gov/pdf/p241200_ftc_comment_to_copyright_office.pdf, accessed 5 February 2025.

[19]NITI Aayog, Responsible AI for All: Adopting the Framework – A Use Case Approach on Facial Recognition Technology (Government of India, 2022) https://www.niti.gov.in/sites/default/files/2022-11/Ai_for_All_2022_02112022_0.pdf,  accessed 5 February 2025.

[20]Moffatt v Air Canada [2024] BCCRT 149 (CanLII) https://canlii.ca/t/k2spq accessed 5 February 2025.

[21]TheSecurityBench, 'Accountability for Chatbots' (LinkedIn, 20 December 2023) https://www.linkedin.com/pulse/accountability-chatbots-thesecuritybench-vxe5f/ accessed 5 February 2025.

[22]Kevin Roose, ‘The Lawyer Who Used ChatGPT and Got Fined’ New York Times (June 2023) https://www.nytimes.com/2024/08/30/technology/ai-chatbot-chatgpt-manipulation.html accessed 10 February 2025.

[23]Ibid 21

[24] James Vincent, ‘OpenAI’s ChatGPT Bug Exposed Users’ Chat Histories’, The Verge (24 March 2023) https://www.theverge.com,  accessed 7 February 2025.

[25]‘General Data Protection Regulation (GDPR)’, https://gdpr-info.eu/, accessed 7 February 2025

[26]Government of India, 'The Digital Personal Data Protection Act, 2023' https://www.meity.gov.in/writereaddata/files/Digital%20Personal%20Data%20Protection%20Act%202023.pdf accessed 10 February 2025.

[27]Benj Edwards, ‘Deepfake Scammer Walks Off with $25 Million in First-of-Its-Kind AI Heist’ Ars Technica (6 February 2024) https://arstechnica.com/information-technology/2024/02/deepfake-scammer-walks-off-with-25-million-in-first-of-its-kind-ai-heist/, accessed 7 February 2025.

[28]Anuradha Bhasin v. Union of India (2020) 3 SCC 637.

[29] Gaurav Bhatia v. Union of India, CS(OS) 2563/2013.

[30]Aleena Maria Moncy, Protection Of AI Created Works Under IPR Regime, Its Impact And Challenges: Analysis, International Journal of Law and Social Sciences, vol 10, issue I (2024) (Alliance School of Law, Alliance University).

[31]Infopaq International A/S v Danske Dagblades Forening (C-5/08) EU:C:2009:465 (16 July 2009).

[32]Acohs Pty Ltd. v Ucorp Pty Ltd, [2012] FCAFC 16.

[33]Ibid 32

[34]Eastern Book Company v D B Modak (2008) 1 SCC 1 (SC).

[35]Rupendra Kashyap v Jiwan Publishing House Pvt Ltd 2016 SCC OnLine Del 1951.

[36]MySpace Inc v Super Cassettes Industries Ltd [2016] DHC (FAO (OS) 540/2011, CM Appl 20174/2011, 13919 & 17996/2015).

[37]South Asia FM Limited v. Union of India, AIR 2018 Mad 1839.

[38]M/S Kibow Biotech v M/S Registrar of Trade Marks [2023] DHC 137 (CA (COMM IPD-TM) 39/2022 & IA 179/2023).

[39]Dr. Alaka Sharma v. Union of India, Second Appeal No. 192 of 2007.
