

No, Zoom Is Not Stealing Your Data. Here’s Why.


In a whirlwind week of developments for Zoom, speculation about privacy issues connected to the company’s terms of service (TOS) has sparked concerns—along with some panic—about how it uses customer data to train AI models. This echoes broader concerns about privacy and data security across the digital communication landscape. Plus it’s another instance in which questions about the handling of AI are arising as quickly as AI technology is advancing.

The breaking news here at the end of the week is that the backlash has led Zoom to change its TOS to avoid the issue of data collection for AI altogether. Let’s unpack what happened.

The level of vitriol in the Zoom example has not been trivial. Some industry leaders publicly called out Zoom for mishandling this situation, which is understandable. Zoom has been on the wrong side of data privacy guardrails before. The company, which grew at an astronomical rate during the pandemic, was found to have misrepresented the use of certain encryption protocols, which led to a settlement with the FTC in 2021. That’s the part specific to Zoom. But the company is also being condemned as one more example in the litany of bad actors in big tech, where lawsuits about and investigations into data practices are countless. It’s no surprise that the public assumes the worst, especially given its added unease about the future of AI.

Fair enough. No one put Zoom in that crossfire. Nonetheless, it’s still true that software makers must strike a delicate balance between user data protection and technological advancement. Without user data protection, any company’s reputation will be shot, and customers will leave in droves; yet without technological advancement, no company will attract new customers or keep meeting the needs of the ones it already has. So we need to examine these concerns—about Zoom and more broadly—to shed light on the nuanced provisions and safeguards that shape a platform’s data usage and its AI initiatives.

An analyst’s take on Zoom

By pure coincidence, around 20 other industry analysts and I spent three days with Zoom’s senior leadership in Silicon Valley last week. During this closed-door event, which Zoom hosts every year to get unvarnished feedback from analysts, we got an in-depth look into Zoom’s operations, from finance to product and marketing, acquisitions, AI and beyond. Much of what we learned was under NDA, but I came away with not only a positive outlook on Zoom’s future, but also a deeper respect for its leadership team and an admiration for its culture and ethos.

It’s worth noting that we had full access to the execs the whole time, without any PR people on their side trying to control the narrative. I can tell you from experience that this kind of unfettered access is rare.

You should also know that analysts are a tough crowd. When we have this kind of private access to top executives and non-public company information, we ask the toughest questions—the awkward questions—and we poke holes in the answers. I compared notes with Patrick Moorhead, CEO and principal analyst of Moor Insights & Strategy, who’s covered Zoom for years and attended many gatherings like this one. He and I couldn’t think of one analyst knowledgeable about Zoom’s leadership and operations whose opinion has soured on the company because of the furor about the TOS.

Still, we were intent on finding out more, so Moorhead and I requested a meeting with key members of Zoom’s C-suite to get a better understanding of what was going on with the TOS. We had that meeting mid-week, yet before we could even finish this analysis, our insights were supplemented by a startlingly vulnerable LinkedIn post by Zoom CEO Eric Yuan. In that post, he said Zoom would never train AI models with customers’ content without explicit consent. He pledged that Zoom would not train its AI models using customer “audio, video, chat, screen sharing, attachments and other communications like poll results, whiteboard and reactions.”

What happened with Zoom’s terms of service change?

In March 2023, Zoom updated its TOS “to be more transparent about how we use and who owns the various forms of content across our platform.” Given that Zoom is under FTC mandates for security disclosures, this kind of candor makes sense. Where the company went wrong was in making this change quietly, without clearly delineating how Zoom would use data to train AI.

In our discussions with Zoom this week, the company took full ownership of that lack of communication. I don’t believe that the company was trying to hide anything or get anything past users. In fact, many of the provisions in the TOS don’t currently affect the vast majority of Zoom’s customers. In being so proactive, the company inadvertently got too far ahead of itself, which caused unnecessary alarm among many customers who weren’t ever affected by the issue of AI training data.

Once the (understandable) panic began, Zoom released an updated version of its TOS, along with a blog post explaining the changes from the company’s chief product officer, Smita Hashim. Hashim clarified that Zoom is authorized to use customer content to develop value-added services, but that customers always retain ownership and control over their content. She also emphasized the wording added to the TOS: “Notwithstanding the above, Zoom will not use audio, video or chat Customer Content to train our artificial intelligence models without your consent.”

The day after Zoom released its blog post explaining the TOS changes, Yuan addressed the communication failure and the company’s plans for training AI more directly and soberly. The CEO took responsibility in his LinkedIn mea culpa, saying the company had an internal process failure. The post on his personal page addressed users’ concerns, similar to Zoom’s official blog post, but Yuan emphasized the promise not to train AI with customer data with a bold statement. He wrote, “It is my fundamental belief that any company that leverages customer content to train its AI without customer consent will be out of business overnight.”

By the end of the week, Zoom cemented Yuan’s commitment not to use customer data to train AI and issued a revised TOS, effective August 11, 2023. Hashim’s blog post was also updated with an editor’s note reiterating Zoom’s AI policy. What’s more, the company made immediate changes to the product.

Will this satisfy everyone who believes that Zoom steals their information and can’t be trusted? Maybe not. Yet with all of this in mind, let’s take a clear-eyed look at the different aspects of how Zoom uses data.

How Zoom uses customer data

First, let’s distinguish between the two categories of data addressed in Zoom’s TOS: “service-generated data,” which includes telemetry, diagnostic and similar data, and “customer content,” such as audio recordings or user-generated chat transcripts.

Zoom owns the service-generated data, but the company says it is used only to improve the service. Meanwhile, the customer content of any Zoom meeting, meaning the video, audio, chat and any files shared within its virtual four walls, is owned entirely by the user. Zoom has limited rights to use that data to provide the service in the first place (as in the example that follows) or for legal, safety or security purposes.
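
To make the distinction concrete, here is a minimal sketch in Python of how these two categories and their permitted uses might be modeled. The names and structure are my own illustrative assumptions, not Zoom’s actual schema.

```python
from enum import Enum, auto

class PermittedUse(Enum):
    PROVIDE_SERVICE = auto()        # e.g., buffering meeting audio so it reaches participants
    IMPROVE_SERVICE = auto()        # product improvement based on telemetry and diagnostics
    LEGAL_SAFETY_SECURITY = auto()
    AI_TRAINING = auto()            # only with explicit consent under the revised TOS

# The two categories of data described in the TOS, with owner and default permitted uses.
DATA_CATEGORIES = {
    "service_generated_data": {     # telemetry, diagnostic and similar data
        "owner": "Zoom",
        "permitted_uses": {PermittedUse.IMPROVE_SERVICE},
    },
    "customer_content": {           # audio, video, chat, screen shares, attachments
        "owner": "Customer",
        "permitted_uses": {
            PermittedUse.PROVIDE_SERVICE,
            PermittedUse.LEGAL_SAFETY_SECURITY,
        },
    },
}
```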

The usage rights outlined in the TOS for meetings exist to safeguard the platform from potential copyright claims. These rights protect Zoom’s platform infrastructure and operation, allowing the company to manage and store files on its servers without infringing on content ownership.

Here’s an example: a budding music band uses the platform to play some music for friends. Zoom, just by the nature of how the service works, must upload and buffer that audio onto company servers (among other processes) to deliver that song—which is considered intellectual property—to participants on the platform. If Zoom does not have the rights to do so, that band, its future management, its record label or anyone who ever owns that IP technically could sue Zoom for possessing that IP on its servers.

This may sound like a fringe use case, and it would be unlikely to hold up in court, but it is not unheard of and would expose the company or any future company owner to legal risk.

Is Zoom using your data to train AI models?

After this week’s changes to the TOS, the answer to this question is now a resounding No. When Zoom IQ Meeting Summary and Zoom IQ Chat Compose were recently introduced on a trial basis, they used AI to elevate the Zoom experience with automated meeting summaries and AI-assisted chat composition. But as we are publishing this article on August 11, Zoom says that it no longer uses any customer data to train AI models, either its own models or those of third parties. However, to best understand the series of events, I’ll lay out how the company previously handled the training of models.

Account owners and administrators were given full control over enabling the AI features. How Zoom IQ handled data during the free trial was addressed transparently in this blog post, which was published well before the broader concerns around data harvesting and AI model training arose. (The post has now been updated to reflect the clarified policy on handling customer data.)

When Zoom IQ was introduced, collecting data to train Zoom’s AI models was made opt-in based on users’ and guests’ active choice. As with the recording notifications that are familiar to most users, Zoom’s process notified participants when their data was being used, and the notification had to be acknowledged for a user to proceed with their desired action. Separate from the collection of data for AI, Zoom told me this week that the product alerts users if the host has even enabled a generative AI feature such as Meeting Summary.
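
As a rough illustration of that consent flow, here is a minimal sketch. The function and parameter names are hypothetical and not part of any Zoom API; the point is simply that the notification is blocking and consent is checked before any data is used for training.

```python
def can_use_for_ai_training(admin_enabled_feature: bool,
                            participant_acknowledged_notice: bool) -> bool:
    """Hypothetical consent gate: customer content may feed AI training only if the
    account admin enabled the feature AND the participant acknowledged the notice."""
    return admin_enabled_feature and participant_acknowledged_notice


def proceed_with_action(admin_enabled_feature: bool, participant_acknowledged_notice: bool) -> str:
    # Mirrors the familiar recording-notification pattern: the participant must
    # acknowledge the notice before proceeding with the action that uses their data.
    if not admin_enabled_feature:
        return "proceed (no generative AI feature is active)"
    if participant_acknowledged_notice:
        return "proceed (participant notified and consented)"
    return "blocked until the participant acknowledges the notice or opts out"
```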

User data was collected to enhance the AI models’ capabilities and overall user experience. Given the latest change to the TOS, it is unclear how Zoom plans to train its AI models now that it won’t have customer data to work with.

Until this week, here is what the opt-in looked like within the Zoom product.

Previous opt-in language for Zoom IQ Meeting Summary

And here is what it looks like as of August 11, 2023.

How account owners and administrators now enable and control the Zoom IQ Meeting Summary feature (updated screenshot of the Meeting Summary opt-in page)

Zoom’s federated AI approach integrates various AI models, including its own, alongside ones from companies such as Anthropic and OpenAI, as well as select customer models. This adaptability lets Zoom tailor AI solutions to individual business demands and user preferences—including how models are trained.
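
To show what a federated approach can look like in practice, here is a minimal routing sketch. The provider names come from the discussion above, but the configuration keys and selection logic are assumptions, not Zoom’s implementation.

```python
# Illustrative routing table; the providers are those named in the article, but the
# configuration keys and selection logic below are assumptions, not Zoom's implementation.
MODEL_PROVIDERS = {
    "zoom": "Zoom's own models",
    "anthropic": "Anthropic models",
    "openai": "OpenAI models",
    "customer": "a customer-supplied model",
}

def select_model(account_config: dict, task: str) -> str:
    """Pick a provider per account preference for a given task, defaulting to Zoom's own models."""
    preference = account_config.get("preferred_provider", {}).get(task, "zoom")
    return MODEL_PROVIDERS.get(preference, MODEL_PROVIDERS["zoom"])

# Example: an account that prefers Anthropic for meeting summaries.
print(select_model({"preferred_provider": {"meeting_summary": "anthropic"}}, "meeting_summary"))
```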

Responsible AI regulation will be a long time in the making. Legislators have admitted to being behind the curve on the rapid adoption of AI as industry pioneers such as OpenAI have called for Congress to regulate the technology. In the current period of self-regulation, the company’s AI model prioritizes safety, interpretability and steerability. It operates within established safety constraints and ethical guidelines, enabling training with well-defined parameters for decision making.

The bottom line: Zoom is using your data, but not in scary ways

Amid widespread privacy and data security concerns, I believe Zoom’s approach is rooted in user control and transparency—something reinforced by this week’s changes to the TOS. There are nuanced provisions within Zoom’s TOS that allow it to take steps that are necessary to operate the platform. This week’s events have highlighted the need for Zoom to communicate actively and publicly what I believe it is already prioritizing internally.

As technology—and AI in particular—evolves, fostering an open dialogue about data usage and privacy will be critical in preserving (or in some cases, rebuilding) trust among Zoom’s users. This week has shown that people are still very skittish about AI, and rightfully so. There are still many unknowns about AI, but Moor Insights & Strategy’s assessment is that Zoom is well positioned to securely deliver a broad set of AI solutions customized for its users. Zoom has established that it intends to do so without using customer content to train its AI models. As the company navigates data privacy concerns, I hope it can strike a balance between addressing users’ concerns and advancing technology to meet their business needs.

The company admittedly had an operational misstep. Let’s not confuse that with reprehensible intent. Zoom as an organization and its CEO personally have acknowledged customers’ concerns and made necessary adjustments to the TOS that accurately reflect Zoom’s sensible data privacy and security governance. I now look forward to seeing Zoom get back to focusing on connecting millions of people worldwide, bringing solutions to meetings, contact centers and more that make people more productive and gatherings more meaningful.