AI is getting more sophisticated, and real-time deepfakes have entered the realistic stage


Jakarta, domclub Indonesia

Chairman of the Cyber Security and Communication Research Institute (CISSReC), Pratama Persadha, said that the development of increasingly sophisticated artificial intelligence (AI) technology is making cyber fraud even more dangerous, one example being the ability to run deepfakes in real time.
He said modern GPU technology and deepfake model optimization made the process lighter, making it possible to do it with just a mid-range gaming laptop.
“The ability to perform deepfakes in real time during video calls is at a completely realistic stage. Modern GPU technology and deepfake model optimization make this process very light compared to a few years ago,” Pratama told domclubIndonesia.com on Thursday (11/12).
“This kind of fraud no longer requires a supercomputer or an enterprise-level computing rig. A mid-range gaming laptop with a GPU carrying 6–8 GB of VRAM is enough to run real-time deepfake models with convincing quality,” he added.
He added that cloud-based techniques minimize the need for local hardware, so anyone with internet access and a credit card can run facial and voice manipulation services through an AI-as-a-Service platform.
In other words, technical barriers have dropped drastically; cybercriminals no longer need to be high-level technical experts. They only need to combine social skills, creativity, and access to widely available software to execute this type of fraud.
Pratama said the development of AI technology over the last two years has fundamentally changed the cybersecurity landscape.
Today’s generative AI models are said to be increasingly efficient, their computing requirements increasingly affordable, and their software increasingly easy to access.
These changes not only encourage positive innovation, but also give rise to new opportunities for cybercriminals.
One form this takes is deepfake-based fraud, which is now evolving from static video manipulation into real-time visual and audio manipulation.
“This phenomenon places society and organizations in a more complex threat situation because the boundaries between genuine and imitation interactions are increasingly difficult to recognize,” he said.
Real-time deepfakes are made possible by a range of open-source and commercial AI models that support facial and voice manipulation with low latency, as little as a few tens of milliseconds. This is said to allow attackers to impersonate a boss, co-worker, or relative in a live call without any prior recording or editing.
In the context of social-engineering attacks, this real-time capability eliminates the lag that used to be a weakness of traditional deepfakes, so victims feel they are interacting naturally with a real human.
Cyber trends 2026
AI-based fraud is expected to be a major trend in 2026, with a much higher degree of automation and personalization.
Pratama said that the combination of leaked data, public profiles, and models able to imitate a person’s speaking style and behavior will intensify impersonation fraud, making it harder to detect.
“Investment and financial fraud will increasingly be packaged with deepfakes capable of impersonating celebrities, officials or public figures,” he explained.
Business email compromise (BEC) attacks, previously text-based, will also shift toward business identity compromise attacks that imitate video calls from company officials.
In addition, Pratama said, AI will be used to discover and execute vulnerabilities automatically, creating a wave of attacks that combine social engineering and technical exploitation simultaneously.
According to him, regulation and digital literacy, society’s ammunition against cyber fraud, will likely keep playing catch-up, while attack dynamics continue to move much faster than society’s ability to adapt.
Amid these conditions, society and organizations are urged to understand that cyber threats are no longer just a matter of technical compromise.
“Digital identity is now both a target and a weapon. The ability to differentiate authentic interactions from artificial interactions will become an important competency at both the individual and institutional levels,” emphasized Pratama.
On the other hand, deepfake detection technology will also develop, but its effectiveness will be limited by how quickly AI manipulation techniques evolve.
Furthermore, one of the most crucial steps is building a culture of two-step verification, stricter anti-fraud policies, and organizational readiness to anticipate increasingly sophisticated identity attacks.
“In this way, society can enter the 2026 era with adequate readiness to face a threat ecosystem that is increasingly supported by artificial intelligence,” concluded Pratama.
(lom/fea)