
Walking a tightrope: California Invasion of Privacy Act and AI


The increasing use of AI in customer service interactions presents exciting opportunities for businesses to improve efficiency and personalize experiences.

However, this technological advancement also raises concerns about privacy and legal compliance. The California Invasion of Privacy Act (CIPA) plays a crucial role in safeguarding the privacy of conversations involving Californians. Recent lawsuits shed light on the potential pitfalls businesses can face when using AI tools that record or analyze customer interactions. In this article, we explore the key aspects of CIPA, the allegations in the recent lawsuit, and offer practical tips for businesses to navigate this evolving legal landscape.

What is CIPA?

CIPA is a state law enacted in 1967 that protects the privacy of confidential communications. Among other things, it prohibits recording or eavesdropping on conversations without obtaining prior consent from all parties involved. For decades, this statute was applied primarily to communications by telephone.

Recently, however, a handful of courts have suggested that CIPA could apply to internet-based communications, such as VoIP calls, video chats, and even written communications where third parties are involved—even though the law was enacted decades before the internet became widely adopted.

The potential penalties for violating CIPA can be significant. The law allows for both civil and criminal penalties. Individuals who prevail in a civil lawsuit under CIPA can recover up to $5,000 per violation without proving actual damages. Additionally, CIPA violations can be prosecuted as misdemeanors, punishable by jail time and fines.

In recent years, CIPA lawsuits have centered on the use of website cookies and other online tracking technologies. These lawsuits allege that by using such website tools to collect data on user activity without proper consent, companies are essentially eavesdropping on consumers’ online interactions and communications.

Lured by the potential for statutory penalties, enticed by the ease of bringing lawsuits against companies that use common analytics tools, and emboldened by open-ended rulings, plaintiffs have driven a surge in CIPA class actions and individual lawsuits over the last year and a half. These lawsuits test the bounds of reason but nonetheless force companies to commit significant resources to defending themselves.

A New Theory of Liability Targeting AI Tools

Lawsuits filed in late 2023 and early 2024 allege that businesses using Google’s Contact Center AI (CCAI) violate CIPA’s wiretapping provisions. According to the lawsuits, the businesses recorded customer service calls without informing customers of the recording or obtaining their consent. The lawsuits further allege that the businesses and Google used CCAI not only to route calls but also to listen in on and analyze customers’ interactions with customer service representatives.

According to the complaints, these alleged recordings and analyses occurred without the customers’ knowledge or consent. The complaints further contend that the recordings were used to train and improve Google’s AI models, potentially for targeted advertising or other undisclosed purposes.

What’s Next — What Can a Company Do to Reduce Risk?

As of publication, there are no decisions in these lawsuits. One case is awaiting a decision on a motion to dismiss, and the defendants in the other case have yet to respond to the Complaint. However, the plaintiffs’ bar is likely watching these “test” cases carefully.

If either lawsuit survives a motion to dismiss, it would not be surprising to see a flood of similar claims over the next several months. These lawsuits therefore serve as a warning for businesses that rely on AI-powered customer service tools. Companies should take proactive steps to provide proper notice of third-party software usage on customer-facing platforms, and to avoid legal challenges that arise where such notices are absent or do not appear early enough in the customer interaction.

The recent lawsuits further highlight the risks businesses face when using third-party technologies, including AI tools—particularly those that record or analyze customer conversations. Companies contemplating such technology should ensure that they obtain clear and informed consent from customers before recording or analyzing conversations.

The lawsuits also underscore the need for businesses to understand how third-party vendors are using customer data and to ensure those practices comply with CIPA.


Harrison Brown and Ana Tagvoryan are partners in Blank Rome LLP’s Business Litigation practice group.
