As Deepfakes Spread, Professor Explains How to Spot Them
2025-01-16

Every day, we encounter countless photos and videos online, yet amid this constant stream of information we often fail to notice that some of them are fake. While many are harmless, the case of a scammer who used AI-generated photos of Brad Pitt to swindle over €830,000 from a French woman shows how deepfakes can enable serious financial fraud and other crimes.

Deepfakes Can Have Positive Applications Too


Today, deepfakes carry a largely negative connotation, but their origins were not malicious, says Prof. Dr. Artūras Serackis, head of the Department of Electronic Systems at VILNIUS TECH.

According to him, the development of such technologies stemmed from the simple desire to use AI to generate human likenesses and other images. Initially, these generated visuals were imperfect, blurry, and lacked detail. However, modern AI solutions can create convincing faces, mimic voices, insert fabricated images into videos, or even manipulate live broadcasts. While often used for humor or entertainment, deepfakes can also have beneficial applications.

“There are AI technologies that enable the creation of a digital twin, a virtual persona speaking the same words but in a different language,” explains Prof. Serackis. These solutions could find significant use in the film industry. “Imagine going to the cinema and, instead of reading subtitles or hearing dubbed voices, seeing your favorite actor speaking Lithuanian in their own voice. The effect would be incredible.”

While such a prototype isn’t yet realized, digital twins could be highly beneficial in education. AI-generated ‘virtual teachers’ can assist educators, especially in today’s digital age.

“Students prefer courses where they can rewatch lecture recordings later. However, this can be uncomfortable for some lecturers. A ‘virtual teacher’ with the lecturer’s or another person’s face can create video content from the provided material that students can revisit multiple times. Moreover, if certain sections, formulas, or methods are unclear, AI can generate another video where the ‘teacher’ explains the material differently or in more detail,” shares Prof. Serackis.

Rising Threats

Although advanced AI tools allow for increasingly impressive and realistic fakes, they also bring more risks. While the technology itself isn’t inherently malicious, its accessibility means anyone, even those with ill intentions, can use it, warns Prof. Serackis.

Deepfakes are often used during elections to discredit politicians or harm their reputations. However, elections aren’t the area where deepfakes cause the most damage. A study by The Coalition For Women In Journalism found that over the past two years, deepfakes have been primarily used to incite bullying, spread disinformation, and create pornography.

“One example comes from the U.S., where Carmel High School students faked their principal’s image in a video where he supposedly criticized Black students, offending a large part of the community. Despite the backlash, the school’s response was inadequate, and lawsuits against the administration were considered. A similar incident occurred in another school.”

Sometimes, people’s images are used in pornographic videos, which are then distributed online. These are highly sensitive cases, as such videos are often shared through private or poorly controlled channels, says Prof. Serackis.

Recognizing Deepfakes Is Possible

Even high-quality deepfakes can be detected, especially when the person’s face moves in the video. In live broadcasts or video calls, simpler AI technologies are often used, making them easier to spot.

“If someone appears off in a video, ask them to turn their head 90 degrees or wave their hand in front of their face. If it’s a fake, the facial recognition and video generation process will falter. This is a simple yet effective tip, as such recordings can cause significant harm,” advises the professor.

He also recommends paying attention to the background in videos: due to technological limitations, it is often plain, evenly lit, and free of shadows. To identify fake photos, look for AI errors related to human anatomy, such as an incorrect number of fingers. Other giveaways include functional anomalies or violations of physical laws, such as unrealistic reflections in mirrors. Unusual scenarios and social or cultural anomalies can also raise suspicion, for example Japanese individuals hugging in public, which is uncharacteristic of their culture.

“Another interesting aspect is biometric artifacts. When generating a deepfake of a known person, certain features might be inaccurately replicated, like ear shapes. Ears are unique to each individual, like fingerprints, and can be used for identification. Additionally, AI-generated images are often overly stylized, with blurred features or exaggerated colors,” notes Prof. Serackis.
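The manual cues listed above can be read as a reviewer’s checklist. As a purely illustrative sketch (the cue names, weights, and threshold below are invented for this example and are not part of any real detector), a reviewer’s observations might be aggregated into a rough suspicion score:

```python
# Illustrative only: cue names and weights are hypothetical, chosen to
# mirror the manual checks described in the article.
CUE_WEIGHTS = {
    "flat_background": 1,      # plain, evenly lit, shadow-free backdrop
    "anatomy_error": 3,        # e.g. wrong number of fingers
    "physics_violation": 3,    # e.g. impossible mirror reflection
    "cultural_anomaly": 1,     # scenario atypical for the depicted setting
    "biometric_mismatch": 2,   # e.g. ear shape differs from the real person
    "over_stylization": 1,     # blurred features, exaggerated colors
}

def suspicion_score(observed_cues):
    """Sum the weights of the cues a reviewer observed in an image."""
    return sum(CUE_WEIGHTS[c] for c in observed_cues if c in CUE_WEIGHTS)

def verdict(observed_cues, threshold=3):
    """Flag the image once enough weighted cues accumulate."""
    if suspicion_score(observed_cues) >= threshold:
        return "likely fake"
    return "inconclusive"
```

With these example weights, a strong cue such as an anatomy error flags the image on its own, while a plain background alone stays inconclusive; several weak cues together can still cross the threshold.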

VILNIUS TECH Researchers Develop an AI Tool for Rapid Deepfake Detection

Smart technologies are being developed to detect deepfakes. Prof. Serackis and his team at VILNIUS TECH are creating an innovative AI solution to instantly identify deepfakes, particularly those generated by AI, to prevent manipulations in the information space.

“Our solution uses facial recognition technologies to analyze a vast number of facial points, tracking muscle movements, expressions, and even pupil changes to determine if these changes are generated,” explains the scientist.

However, detecting deepfakes requires longer videos where the subject’s head and facial muscles move. This limitation necessitates compromises, but Prof. Serackis assures that developing deepfake detection tools is an ongoing process requiring increasingly sophisticated solutions.
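The idea of tracking pupil changes over time can be sketched in a few lines. This is a toy illustration, not the team’s actual prototype: the threshold values and the assumption that generated faces show an unnaturally flat pupil-size signal are simplifications made for this example.

```python
import statistics

def pupil_signal_check(pupil_diameters_mm, min_frames=30, min_spread=0.01):
    """
    Toy heuristic: real pupils fluctuate slightly from frame to frame,
    so an almost perfectly flat diameter signal is one weak hint (among
    many) that the frames may be generated. Thresholds are arbitrary.
    Returns None when the clip is too short to judge, echoing the need
    for longer videos mentioned above.
    """
    if len(pupil_diameters_mm) < min_frames:
        return None
    spread = statistics.pstdev(pupil_diameters_mm)
    return "suspicious" if spread < min_spread else "plausible"
```

A perfectly constant signal would be flagged, a naturally varying one would pass, and a clip shorter than `min_frames` yields no verdict at all, which is exactly the compromise the professor describes.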

“Sometimes, AI errors are amusing and easy to spot. However, if AI struggles to generate the correct number of fingers on a hand, creating a tool to automatically detect such errors is a significant challenge. Once developed, such tools must then tackle increasingly advanced generative AI.”

“What worked six months ago may no longer work today. We constantly refine our prototype, using the latest deepfake creation tools to try and ‘fool’ it, then devise new ways to counter these tricks. We also monitor how other researchers detect deepfakes and incorporate similar solutions into our prototype. Furthermore, it’s becoming crucial to detect video edits showing individuals in places they’ve never been or suggesting they’ve visited a city when they haven’t. Addressing these challenges requires different technologies, but we’re actively working on solutions,” shares Prof. Dr. Artūras Serackis.

 