
Scientists develop DeepFake detection using light reflections


Scientists have developed a method to detect Deepfakes by analyzing light reflections in the eyes. With advances in Artificial Intelligence and photo/video morphing techniques, malicious use of Deepfakes has increased. Miscreants have started creating Deepfakes to circulate misinformation and to harass, coerce, or blackmail people with obscene or compromising pictures. The new detection method focuses on the corneas of the eyes.

What is a Deepfake?

The term “Deepfake” is a portmanteau of “deep learning” and “fake”. The technique is used to create synthetic media in which the face of one person is replaced with that of another. Deepfakes have gained the attention of authorities across the world as their use has increased exponentially in celebrity pornographic videos, fake news, revenge porn, hoaxes, and financial fraud.

Why is it important to identify Deepfakes?

Identification of Deepfakes is necessary to determine the legitimacy of the information presented in the media. Synthetic media created through the Deepfake technique can have severe implications for the social structure. It is essential to safeguard human rights, including the fundamental rights to privacy and to live with dignity. Identifying Deepfakes has therefore become both a challenge and a crucial need for investigating agencies, social media, and other online platforms.

How does this new AI tool detect Deepfakes using light reflections?

Computer scientists from the University at Buffalo have developed a tool to address this problem. The AI tool determines whether a photo is genuine simply by looking at the light reflections in the eyes.

The tool identifies fakes by analyzing the corneas. The human cornea has a mirror-like surface that generates reflective patterns when illuminated by light.

In a real photo, the reflections in the two eyes will be similar because both eyes are seeing the same thing, whereas synthetic media generated by GANs fails to accurately reproduce this similarity.

The tool identifies such differences by mapping the face and analyzing the light reflected in each eyeball. It then assigns the image a score based on a similarity metric: the smaller the score, the greater the chance that the media is a Deepfake.
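For illustration, here is a minimal Python sketch of the core idea, not the researchers' published tool: it locates the two eyes, thresholds the bright specular highlights on each cornea, and scores how well the two highlight patterns overlap (intersection over union). The Haar-cascade eye detector, the fixed brightness threshold, and the file name portrait.jpg are illustrative assumptions; the actual tool relies on learned face and landmark models.

```python
# Sketch only: compare corneal specular highlights between the two eyes.
import cv2
import numpy as np

# Assumption: OpenCV's bundled Haar cascade is good enough to find both eyes in a portrait.
EYE_CASCADE = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def corneal_highlight_mask(eye_bgr, thresh=230, size=(64, 64)):
    """Binary mask of bright (specular) pixels in a normalized eye crop."""
    gray = cv2.cvtColor(cv2.resize(eye_bgr, size), cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    return mask > 0

def reflection_similarity(image_path):
    """IoU of the two corneal highlight masks; a lower score suggests a possible fake."""
    img = cv2.imread(image_path)
    if img is None:
        raise FileNotFoundError(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    eyes = EYE_CASCADE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(eyes) < 2:
        return None  # need both eyes visible (the method works best on portraits)
    # Keep the two largest detections, ordered left to right.
    eyes = sorted(eyes, key=lambda e: e[2] * e[3], reverse=True)[:2]
    eyes = sorted(eyes, key=lambda e: e[0])
    masks = [corneal_highlight_mask(img[y:y + h, x:x + w]) for (x, y, w, h) in eyes]
    union = np.logical_or(*masks).sum()
    if union == 0:
        return None  # no visible highlight in either eye
    return np.logical_and(*masks).sum() / union

# Example usage with a hypothetical file name.
score = reflection_similarity("portrait.jpg")
print(score)
```

In this sketch, a score close to 1 means the two highlight patterns largely agree, which is consistent with a real photo, while a score close to 0 flags the image as a likely Deepfake, mirroring the "smaller score, greater chance of a fake" idea described above.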

The tool is particularly effective for portrait images. In tests, it correctly identified 94% of Deepfakes.

More advanced methods still need to be developed to identify Deepfakes in landscape images and videos.

You can access the research paper here.


Do subscribe to our Telegram channel for more resources and discussions on technology law and news. To receive weekly updates and a massive monthly roundup, don’t forget to subscribe to our Newsletter.

You can also follow us on Instagram, Facebook, LinkedIn, and Twitter for frequent updates and news flashes about #technologylaw.
