Remember Photoshop? Deepfakes are the 21st-century version of Photoshop! If you have seen the video of Rashmika Mandanna entering an elevator, of Tom Cruise showing off a coin trick, or of Facebook CEO Mark Zuckerberg appearing to talk about how Facebook 'controls the future' via stolen user data, then you’ve seen a deepfake. “Deepfakes are videos or images created using Artificial Intelligence (AI)-powered deep learning software (hence the name) to show people saying and doing things that they didn’t say or do,” says cybersecurity expert Jiten Jain.
Beyond porn to politics and fraud
The first known examples of deepfake videos, which appeared in 2017, featured celebrities’ faces morphed onto those of porn stars. Since then, the technology has been used to influence voters, commit cybercrime for financial gain, siphon money from lonely men and women, and create misinformation and confusion about important issues. In Madhya Pradesh, videos have emerged that use clips from the popular TV show Kaun Banega Crorepati. The clips show the Amitabh Bachchan-hosted quiz show asking questions about Madhya Pradesh politics to whip up anti-incumbency sentiment among viewers. An AI-generated image of an explosion at the US Pentagon that went viral in May caused the stock market to dip briefly. Deepfakes can also fuel other unethical acts such as revenge porn, which disproportionately harms women. “Recently, French president Emmanuel Macron’s deepfake voice and face were used in a video in which he is shown singing ‘Alouette’, a nursery rhyme. One could call it a funny prank or a political gesture to attack the head of the French government,” says Dorian Nadaud, French architect and UX/UI designer. Most deepfake targets are movie stars and politicians, as their video footage and audio recordings are easily available in the public domain.
So, how deep is this problem? “The technology is becoming so efficient that with just 40 high-definition photos and a one-minute video clip, you can now create a sophisticated 30-second video of any celebrity you want to impersonate on a simple laptop. Even the cost of acquisition has gone down. Deep learning software that used to cost around $10 lakh is now available for as little as $5,000,” says Jain. Thanks to sites such as Facebook, Instagram and YouTube, there are plenty of images and audio clips for fraudsters to find. “People should be aware of the volume of data being created and shared about them by private and public institutions, the presence of CCTV in public spaces, and so on. Not that one can be completely undocumented, but one can limit the data publicly available about oneself,” suggests Nadaud.
Since the technology is very new, there aren’t clear rules about how deepfakes should be made and shared. It’s not yet clear, for instance, if or when viewers should be informed that they’re looking at a deepfake, or what guidelines should govern the consent process for the subject of a deepfake.
Can you spot a deepfake?
With the technology getting increasingly sophisticated, detecting a deepfake is getting tougher. In 2018, researchers in the US demonstrated that deepfake faces didn’t blink the way humans do, which was considered a reliable way to tell whether footage was fake. But once the study was released, deepfake makers quickly fixed the flaw, making deepfakes much harder to identify. Research intended to aid the detection of deepfakes frequently ends up improving deepfake technology.
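For the technically inclined, the blink-rate idea from that 2018 research can be illustrated with a short sketch. This is a minimal, illustrative example only: it assumes that six eye landmarks per frame have already been extracted by a face-landmark model (dlib and MediaPipe are commonly used for this), and the 0.2 eye-closure threshold and the 10–40 blinks-per-minute range are assumptions, not calibrated values.

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """Eye aspect ratio (EAR) from six (x, y) landmarks; it drops when the eye closes."""
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def count_blinks(ear_per_frame, closed_threshold=0.2):
    """Count blinks as dips of the EAR below the threshold followed by a recovery."""
    blinks, eye_closed = 0, False
    for ear in ear_per_frame:
        if ear < closed_threshold:
            eye_closed = True
        elif eye_closed:  # the eye has reopened after being closed
            blinks += 1
            eye_closed = False
    return blinks

def blink_rate_looks_human(ear_per_frame, fps=30, normal_range=(10, 40)):
    """Flag footage whose blinks per minute fall outside a typical human range."""
    minutes = len(ear_per_frame) / (fps * 60.0)
    rate = count_blinks(ear_per_frame) / max(minutes, 1e-6)
    return normal_range[0] <= rate <= normal_range[1]
```

As the research history above shows, this particular cue is no longer reliable on its own, since newer deepfakes have learned to blink convincingly.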
So, how can ordinary people tell the difference between a video manipulated by AI and a normal, unaltered one? A good-quality deepfake can be very difficult even for a forensic lab to detect, believes Jain. Fortunately, not all deepfake videos are that sophisticated. Poor-quality, algorithmically manipulated videos can be spotted through lip-syncing that does not match perfectly, odd shadows around the eyes, or skin tone that looks strange. Deepfake makers also frequently find it difficult to replicate the reflections created by jewellery and glasses.
Deepfake voice cloning is also being used to impersonate family members and plead for emergency help (often in the form of wired money). Cybersecurity experts suggest keeping a secret code word that every family member knows, but which criminals wouldn’t guess. If someone claiming to be your daughter, grandson or nephew calls, asking for the code word can separate real loved ones from fake ones. On a video call, if you suspect the person is an AI-generated image, ask them to turn their head and to put a hand in front of their face. Those movements can be revealing, because deepfakes often haven’t been trained to reproduce them realistically.
However, these are just pointers to help guide people through deepfakes. Until legislation begins to address the threat of deepfakes, one has to rely on intuition to identify what is fake and what is real. Jain suggests cross-checking content with other reliable sources, not only to understand the information better but also to find out whether it is real at all. “Don’t believe everything that you see, unless it’s published and authenticated by a mainstream media channel or publication. Seeing is believing is no longer true.”
Deepfake signs that you should look out for
- Pay attention to the face. High-end deepfake manipulations almost always focus on facial alterations.
- Pay attention to the lip movements. Deepfakes often have bad lip-syncing.
- Pay attention to blinking. Does the person blink enough or too much?
- Observe the glasses carefully. Is there a glare? Is the glare excessive? Does the person's movement affect the glare's angle? Most deepfakes fall short of accurately capturing the physics of light.