What Tesla’s data breach reveals about consumer privacy grey areas

Written by Will Novosedlik

Earlier this month, Reuters reported that between 2019 and 2022, groups of Tesla employees privately shared, via an internal messaging system, the sometimes highly invasive videos and images recorded by customers’ car cameras.

Tesla vehicles have nine on-board cameras – eight on the outside, and one on the inside, mounted just above the rear-view mirror. They are used for a variety of driverless functions such as autopilot and autopark, as well as dashcam capabilities.

The cameras, along with several radar and sonar sensors, are there primarily to aid in the development of self-driving technology. Since the AI that drives autopilot is still in development, there is a mountain of data that needs to be collected for training purposes.

Data is not gathered without the customer’s consent. Even so, with a global fleet of several million vehicles, that’s a lot of data. It should come as no surprise that some of it is personal, even intimate, and that a customer would more than likely prefer it remain private.

The Reuters story revealed that the images were being shared by “data labelers” – employees whose job it is to label objects caught on camera as a way of training autopilot to recognize things such as fire hydrants, stop lights, bikes or people. This task requires hundreds of employees poring over hours and hours of recordings, day in and day out.

It’s a monotonous job – and apparently many of the culprits cited that as an excuse for sharing the imagery. They were just bored. Other employees pointed out that customers had consented to the company’s use of the data to improve the product experience, and believed that sharing it on internal servers was therefore warranted. But in this day and age, we all know it takes only a click of a button for content to jump from internal to external.

So what do the experts have to say about this issue?

According to Chris Piche, CEO of Smarter AI, a leader in AI cameras and enablement software, data capabilities must be deployed not only effectively, but safely. “It must be ensured that cameras are effective in capturing and analyzing data, and that the data is handled responsibly and in compliance with data protection regulations,” he says. “Tesla’s breach highlights the risks and vulnerabilities that exist in the storage and processing of camera data, which has negative consequences for customer trust and the reputation of our industry as a whole.”

But is keeping customer data private really realistic? We asked Shane Saunderson, founder and CEO of AI consulting outfit Artificial Futures for his opinion. “There are certain people that just need access to this data,” he says. “And yes, there will be some things that get caught up in that huge data lake that are unnecessary or inappropriate. A car taking an image as it drives down the road so that you can do better AI training is different from a car parked in front of a bedroom window watching people have sex. But the technology doesn’t know the difference. And so ultimately, that’ll come down to a human who looks at it and recognizes that, oops, there are naked people in this. And at that point, is that person a good human who’s just going to push the delete button, or a terrible human who’s going to do something nasty with it?”

When asked what Tesla could do to protect customer data, Piche responded, “They could secure data storage with encryption and other security measures; restrict access to only authorized personnel with a legitimate purpose; and minimize data collection and storage by collecting only necessary data and deleting data that is no longer needed.”

It’s hard to imagine that the data isn’t already encrypted. And there are other grey areas here too: what exactly counts as a “legitimate purpose,” how the AI determines which data to collect, and how to avoid a “dragnet” approach to data gathering.

According to Laura Mingail, founder of content shop Archetypes & Effects, this is a much bigger story than Tesla. “While advances in AI are reducing the need for human involvement at Tesla, we should be shifting this conversation to highlight and address consumer privacy protection needs more broadly,” she says. “As users of many digital platforms, we consent to everything from our location data to facial captures to be able to use AR-enabled filters. As advertisers, we effectively invest in the development of these platforms and their innovative audience-engaging features. To protect consumer privacy, companies and consumers need stronger awareness, guidelines and enforceable penalties for breaches. As marketers, we can help by ensuring that the data being collected from consumers is clear for them, and that what’s being collected is only what is necessary.”

She goes on to say that this includes data held by the platforms we advertise with. “The onus is partially on marketers to spend on platforms, or even partner with other brands, that are appropriately protecting consumers’ data. In the case of video footage, for example, advances in AI can allow for recorded footage to identify only what is needed, such as a human being present but not having to record actual faces.”