Navigating privacy in the age of facial recognition

Amid existing privacy concerns, brands must learn to responsibly use new forms of biometric data.

On July 31, the federal government launched a “COVID Alert” app in Ontario with the goal of mitigating the spread of the coronavirus in Canada by notifying users when they have come into close contact with another user who has tested positive for COVID-19.

Much of the messaging around the app focused on how little data it collects: it uses Bluetooth connections to determine contact, and it makes clear upon first activation that it doesn’t use GPS or collect user data.

It’s not the only time privacy has come into focus since the pandemic began.

In June, the Privacy Commissioner of Canada and similar provincial agencies in Quebec, B.C. and Alberta launched a joint investigation into Tim Hortons’ new mobile ordering app, following media reports that the company was collecting and using data to trace the movements of those who had downloaded it. Meanwhile, the unintentional release of information remains a top concern for many people, with Instacart, Wattpad and the Heart and Stroke Foundation being among those appearing to have been impacted by breaches in recent weeks.

While concerns continue to abound around digital privacy, a new Fjord Trends report by Accenture Interactive notes the collection and storage of online activity is but the tip of the data iceberg. We now live in an age where “our physical behaviour is also generating trackable data, connecting us to the wider digital ecosystem that monitors our streets,” and brands will be called on to address the concerns these advancements are likely to cause.

Already, facial and body recognition technology is being used by brands to “enable seamless interactions such as unlocking things, personalized curation of messages and content, and paying for purchases,” Accenture writes.

In China, Alipay (the financial arm of ecommerce company Alibaba) is using “Smile to Pay” technology. FaceQuote, a Zurich-based insurance company, provides life insurance quotes in exchange for a selfie. And Disney has piloted an interactive movie poster that used AI-based photography and emotion recognition to display different versions of a Dumbo movie poster that matched the expression on the face of the person looking at it.

Inevitably, the use of new facial and body-recognition technology has been met with considerable backlash from consumers, the report notes. For example, video platform Vimeo is facing a lawsuit in the U.S. over having allegedly collected and stored thousands of people’s facial biometrics without their knowledge. Meanwhile, IBM recently ended its facial recognition program out of fears over its susceptibility to bias and potential for misuse, as part of a pledge to address bias and racial inequality.

Here in Canada, U.S.-based technology firm Clearview AI was making its facial recognition software – based on billions of photos pulled from online profiles – available to law enforcement, although it has since been forced to pull its tech out of Canada.

“In today’s post-Cambridge Analytica era, public backlash is a serious risk,” notes Accenture. “It’s essential to learn from mistakes made in the digital environment when developing new products and services. In particular, privacy concerns and consent must be addressed more seriously as, with biometric data, any hack or security breach risks permanently compromising the individual – you can change a password, but not your fingerprint.”

In addition to thinking through the ethical minefields that these new technologies present, the consultancy recommends that organizations consider new services that could be unlocked by facial recognition. “Look at the human experience of these services: Who do they most convenience? How do people consent? How could improved communications between machines – via 5G – create new service opportunities?”

Furthermore, it suggests focusing on educating customers about data consent and privacy, since the consequences of a biometric data breach are much more severe than those of other kinds.

Companies must ensure they make “the invisible visible” so that people understand when a scan, transaction or consent has taken place, it says, adding that screen-based interactions are likely to decrease as automated systems begin accessing data to anticipate customers’ behaviours. “Ensure that people can be the curators of their own personalized experiences – build a platform for people to express, discover, and receive what they want – subject to privacy laws like GDPR.”