Fall of Zao Raises Questions about Personal Security

The idea of doctoring photographic images for our own amusement dates all the way back to the 19th century. The well-known photo of Abraham Lincoln regally staring into the distance is, in fact, a composite: Lincoln's head superimposed on the body of Confederate leader Jefferson Davis.

Fast forward to the 21st century, and many use software such as Photoshop to edit digital photographs, so much so that "photoshop" is widely used as a verb.

Now, with advances in artificial intelligence, apps can edit videos of popular films and television shows, superimposing any face you want onto your favorite characters.

The logo of the Chinese app ZAO, which allows users to swap their faces with celebrities and anyone else, is seen on a mobile phone screen. [Photo/VCG]

This is something the Chinese app Zao did better than anyone else. It offered a simple, intuitive way to create deepfake videos from just one photo of the user.

Traditional deepfake techniques required far more data, typically hundreds of images, to create a realistic face swap. With Zao, however, users could superimpose themselves onto characters' faces on demand and in minutes.

A technology previously confined to the fringes of internet subcultures and revenge pornography has now found its way into the mainstream. The app was downloaded millions of times within hours of launch, crashing the Zao website several times.

So far so good for the groundbreaking app, seemingly set to join the elite tier occupied by the likes of Instagram and TikTok. However, Zao's privacy policy soon aroused concern. The app's terms and conditions gave developers the global right to permanently use any image created on the app, without paying the original content creator.

The terms went on to state that the developers could transfer this authorization to any third party without further permission from the user. As a user of Zao, you would be none the wiser if your face or other personal details were used at the whim of any developer.

This was widely believed to be in breach of Chinese data protection laws.

The owner of Zao, Momo, subsequently deleted the controversial clause, seeking to offer reassurance that biometric information would not be stored or used excessively. It should be remembered that China relies heavily on biometric authentication for verification purposes, especially in financial matters.

The widespread concerns, however, were not alleviated, and the social media and payment platform WeChat soon banned users from uploading content made with Zao, citing security risks.

Alibaba has stated that deepfakes can be detected by current biometric authentication techniques. However, Zao may be only the first of a long line of AI-based challenges to such security measures.

Just five years ago, deepfake algorithms capable of rendering such realism on something as visually complex and subtle as the human face would have been the stuff of dreams. Now, they are a worrying reality.

Private AI research accounts for the bulk of investment in the technology, and often complements the technology policies of countries the world over. Regulators, however, must keep up with AI start-ups: the next big idea that could catch on as virally as Zao may be just around the corner, and security and privacy concerns must be at the forefront of their minds.

Apps available for mainstream download on platforms run by big tech corporations such as Apple, Google and Tencent must also pass approval processes that weigh ethical privacy policies. An app with terms and conditions as lax as Zao's should automatically raise red flags.

How many other apps could be recklessly handling the personal information of potentially very young and vulnerable people? Big tech companies would do well to implement tighter scrutiny, if for no other reason than in the interests of loyal customers, who just want to have fun with new technology without risking their personal information being grossly mishandled.

Barry He is a global technology and business commentator based in London, initially specializing in start-ups and technology PR.

Opinion articles reflect the views of their authors only, not necessarily those of China Focus.