Beware of voice spoofing scams: AI can imitate family members' voices


Jakarta, domclub Indonesia

Cybercrime is becoming increasingly sophisticated as artificial intelligence (AI) technology continues to develop. One scheme that is increasingly widespread is AI voice spoofing.
AI voice spoofing is a scam method that uses AI to clone a voice and deceive targets via telephone or voice messages. The technology can imitate a person's voice from just a few seconds of audio samples.
This mode is known as vishing (voice phishing), and the process is quite short: the scammer calls the victim and, once connected, records 3-10 seconds of their voice. The recording is then processed by AI into a digital clone, after which the perpetrator calls someone close to the victim to ask for money or personal data.
Google previously explored the use of generative artificial intelligence (generative AI) by threat actors in phishing and information operations (IO) campaigns.
They analyzed how fraudsters use generative AI to create more convincing content, such as images and videos.
Citing Google Cloud, they shared insights into how fraudsters use large language models (LLMs) to develop malware. Google emphasizes that although fraudsters are interested in generative AI, its use remains relatively limited.
The initial research reviews new AI-driven tactics, techniques, and procedures (TTPs), as well as emerging trends.
One major case was a theft targeting a multinational company in Hong Kong, in which thieves stole more than HK$200 million (around Rp 430 billion) using voice cloning and deepfakes.
Google's Mandiant Red Team then used these tactics in simulated attacks to test organizations' resilience.
As a result, they found that signs of an AI-generated voice include awkward pauses before answering the phone, an overly steady tone of voice, unnatural responses, subtle delays or distortions, and an inability to answer specific questions.
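Some of these cues lend themselves to simple signal heuristics. The Python sketch below is a hypothetical illustration, not a tool from Google or Mandiant: it flags two of the listed signs in a recorded reply, a long pause before speech begins and unusually uniform loudness. The function names and thresholds (FRAME_MS, PAUSE_SECONDS, ENERGY_CV_FLOOR) are assumptions chosen for the example.

```python
import numpy as np

# Hypothetical thresholds and names -- illustrative assumptions only,
# not values published by Google, Mandiant, or any vendor.
FRAME_MS = 30            # analysis frame length in milliseconds
PAUSE_SECONDS = 1.5      # flag replies that start after a long silence
ENERGY_CV_FLOOR = 0.35   # flag delivery whose loudness barely varies

def frame_energies(samples: np.ndarray, rate: int) -> np.ndarray:
    """Root-mean-square energy of consecutive, non-overlapping frames."""
    frame_len = int(rate * FRAME_MS / 1000)
    n_frames = len(samples) // frame_len
    frames = samples[: n_frames * frame_len].reshape(n_frames, frame_len)
    return np.sqrt(np.mean(frames.astype(np.float64) ** 2, axis=1))

def suspicious_reply(samples: np.ndarray, rate: int) -> list[str]:
    """Heuristic warnings for one recorded reply (mono PCM samples)."""
    energies = frame_energies(samples, rate)
    if energies.size == 0:
        return ["recording too short to analyze"]
    warnings = []

    # Cue 1: awkward pause before answering -- count leading quiet frames.
    speech = energies > 0.1 * energies.max()
    lead_frames = int(np.argmax(speech)) if speech.any() else len(speech)
    if lead_frames * FRAME_MS / 1000 > PAUSE_SECONDS:
        warnings.append("long pause before speech begins")

    # Cue 2: overly steady tone -- loudness varies less than human speech.
    voiced = energies[speech]
    if voiced.size and np.std(voiced) / np.mean(voiced) < ENERGY_CV_FLOOR:
        warnings.append("unusually uniform loudness")

    return warnings
```

Coarse heuristics like these are easy to fool, which is why the remaining cues on the list, unnatural responses and failure to answer specific questions, still depend on a human asking the right questions.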
Some AI companies acknowledge that misuse of their tools can have serious consequences.
"We recognize the potential for abuse of this powerful tool and have implemented strong security measures to prevent the creation of deepfakes and protect against voice impersonation," a Resemble AI spokesperson said in a written statement to NBC News, Wednesday (19/11).
According to Sarah Myers West, co-executive director of the AI Now Institute, a think tank focused on the policy consequences of AI, this technology also has great potential to cause harm.
“This can clearly be used for fraud, deception and disinformation, for example by impersonating institutional figures,” West said.
Many victims are unaware that their voices have been cloned, and people tend to trust familiar voices, a tendency the perpetrators exploit.
(wpj/dmi)
