Social Engineering in 2026: How AI Makes Fraud Almost Impossible to Detect

Vietify IT Team · 6 min read

In 2026, an attacker can clone your CEO's voice in 30 seconds from a LinkedIn video. A deepfake video call is indistinguishable from the real thing without training. Your staff need new skills to navigate this reality.


Social Engineering Has Entered the AI Era

Traditional social engineering relied on confidence tricks, urgency, and impersonation. Skilled attackers could fool people on the phone. In 2026, AI has multiplied this capability by orders of magnitude.

What an attacker can do in 2026 with $100 and publicly available AI tools:

  • Clone any voice in 30 seconds using a short audio sample from YouTube, LinkedIn, or social media. The cloned voice can say anything, in real time, indistinguishable from the original.
  • Generate realistic deepfake video of a known person (your CEO, your bank manager, a government official) saying whatever the script requires
  • Research your business completely — staff names, org structure, current clients, ongoing projects — from LinkedIn, social media, and data broker sites
  • Write perfect personalized scripts in Vietnamese or English tailored to your specific business situation
  • Automate attacks at scale — one attacker can run dozens of simultaneous social engineering campaigns

The Three AI Social Engineering Attacks Hitting Vietnamese SMBs

Attack 1: AI Voice Clone — The "CEO Call"

An accountant or finance manager receives a phone call. The voice on the line is unmistakably their CEO or director: tone, speech patterns, even characteristic phrases are identical.

"Hi [Name], I'm in an important meeting in Ho Chi Minh City. I need you to process an urgent wire transfer immediately — I can't explain everything now but please trust me, this is critical. 120 million to this account by 4pm."

The call ends. The accountant, not wanting to let their boss down, processes the transfer.

The CEO was never on the call. An AI voice clone was used. The money is gone.

True story: in 2024, a finance employee at a Hong Kong firm transferred US$25 million after a deepfake video call. This pattern is now appearing in Vietnamese businesses.

Attack 2: Deepfake IT Support

Your staff member receives a video call from someone appearing to be from your IT provider (or Microsoft support). The face looks real. The voice is natural. They explain there's an urgent security issue and need the staff member to share their screen and install a "security update."

The update is remote access malware. The attacker now has full access to the staff member's machine and everything visible from it.

Attack 3: AI-Personalized Vishing (Voice Phishing)

An attacker calls pretending to be from your bank's fraud department:

"Mr. [Name], we've detected suspicious activity on your business account ending in 4521. Your assistant Ms. [Correct Name] attempted a transfer to [Real Supplier Name] this morning. Did you authorize this?"

The attacker used your bank's name, your account's last four digits (from a breach), your assistant's real name (from LinkedIn), and the name of a real supplier (from your company's public information). Everything sounds legitimate. The "verification" they request steals your banking credentials.


How AI Research Makes Every Attack More Convincing

Before AI, a scammer making up a pretext had to improvise. Now, AI can compile a complete dossier on your business in minutes:

  • From LinkedIn: staff names, roles, photos, recent job announcements, connections
  • From your website: client testimonials (revealing who your clients are), team page, recent news
  • From Google Maps/reviews: business hours, location, customer complaints
  • From data brokers: email addresses, phone numbers, family relationships, property records
  • From previous breaches: partial passwords, previous email addresses, internal data

All of this is synthesized into a personalized attack script that includes real details your staff can verify — making the fraud feel legitimate.


Building Defenses Against AI Social Engineering

Defense 1: Verbal Verification Codes (Highest Priority)

For any request involving money, system access, or sensitive information that arrives by phone or video call: establish a shared verbal codeword system.

Before any sensitive action is taken over the phone, the caller must provide the codeword. This defeats voice cloning because the attacker doesn't know your internal codeword.

Implementation: Choose a 4-6 word phrase. Share it only with key staff. Change it quarterly.
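The rotation-and-check logic behind such a codeword scheme can be sketched in a few lines. Everything here is illustrative: the phrase is a placeholder, and in practice the codeword is memorized by key staff, never stored in plain text on a shared system.

```python
from datetime import date

# Placeholder register: (year, quarter) -> agreed phrase.
# In real use this lives in people's heads, not in code.
CODEWORDS = {
    (2026, 2): "blue lantern river stone",
}

def current_quarter(today: date) -> tuple[int, int]:
    """Return (year, quarter) for the given date."""
    return today.year, (today.month - 1) // 3 + 1

def verify_codeword(spoken_phrase: str, today: date) -> bool:
    """Check the caller's phrase against this quarter's codeword."""
    expected = CODEWORDS.get(current_quarter(today))
    return expected is not None and spoken_phrase.strip().lower() == expected

# A cloned voice can mimic the CEO perfectly, but it cannot
# supply a codeword that was never spoken in public.
assert verify_codeword("Blue Lantern River Stone", date(2026, 4, 18))
assert not verify_codeword("urgent wire transfer", date(2026, 4, 18))
```

The quarterly rotation matters: even if a codeword leaks (overheard, written down), its useful lifetime is capped.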

Defense 2: Callback Verification Protocol

Any phone request for financial action or system access triggers a callback protocol:

  1. Tell the caller you'll call them back to verify
  2. End the call
  3. Dial the person's known, verified phone number (not a number given in the call)
  4. Confirm the request

If the "CEO" is upset that you're following verification procedures, that's a red flag — and if it's the real CEO, they should be proud their team takes security seriously.

Defense 3: Multi-Person Authorization for Financial Transfers

No single employee should be able to initiate and authorize a transfer above a threshold (e.g., 20M VND) without a second person's approval. Even if one person is fooled by social engineering, the attack fails without the second authorization.
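A minimal sketch of that dual-authorization rule, assuming the 20M VND threshold above (class and field names are illustrative; a real deployment enforces this inside your banking or ERP software, not in a script):

```python
from dataclasses import dataclass, field

THRESHOLD_VND = 20_000_000  # single-person limit from the policy above

@dataclass
class TransferRequest:
    amount_vnd: int
    initiated_by: str
    approvals: set[str] = field(default_factory=set)

    def approve(self, employee: str) -> None:
        # The initiator can never count as their own second approver
        if employee != self.initiated_by:
            self.approvals.add(employee)

    def may_execute(self) -> bool:
        """Below threshold: initiator alone. Above: one extra approver."""
        return self.amount_vnd <= THRESHOLD_VND or len(self.approvals) >= 1

req = TransferRequest(amount_vnd=120_000_000, initiated_by="accountant")
assert not req.may_execute()       # a fooled accountant cannot act alone
req.approve("accountant")          # self-approval is silently ignored
assert not req.may_execute()
req.approve("finance_director")    # a second person confirms the request
assert req.may_execute()
```

The point of the design is that the attack now has to fool two independent people through two separate channels, which is far harder than one urgent phone call.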

Defense 4: Staff Training on AI Threats (Quarterly)

Your staff need to know:

  1. Voice and video cloning is real and accessible in 2026
  2. "Verification" requests from callers are suspicious — you verify by calling back
  3. Any request involving urgency + secrecy is a red flag
  4. Unexpected MFA push notifications on their phone should be denied and reported
  5. No legitimate IT provider needs staff to install software on demand

Defense 5: Establish "Pause and Verify" Culture

The most important cultural change: urgency is a manipulation tactic. Legitimate requests can wait for verification. If someone is pressuring your staff to act immediately without verification, that pressure itself is evidence of fraud.

"I'm sorry, I need to follow our verification procedure. I'll call you back on your known number." This sentence defeats most social engineering attacks.


How Vietify IT Trains Your Team Against AI Threats

Our Social Engineering Defense Training for small teams:

  • AI Threat Awareness Session: 90-minute staff session covering voice cloning, deepfakes, and AI fraud
  • Verification Protocol Design: custom verbal codeword and callback system for your business
  • Financial Authorization Policy: written dual-authorization policy for transfers above threshold
  • Simulated Vishing Calls: realistic test calls to assess staff response and identify gaps
  • Annual Refresher Training: keep awareness current as attacks evolve

The Uncomfortable Truth About Social Engineering

Technical security controls — firewalls, MFA, endpoint protection — are necessary but not sufficient. Social engineering bypasses them entirely by targeting the humans, not the systems.

In 2026, your people are your most important security control. A well-trained 15-person team that follows verification procedures will defeat social engineering attacks that fool companies with sophisticated technical defenses.

Book a free Social Engineering Risk Assessment with Vietify IT. We'll assess your current verification procedures, identify your highest-risk scenarios, and design a training program for your team — at no cost.

Call: 0914 985 772 | vietify.vn/contact


Vietify IT Services — Da Nang's Security Awareness Specialists. Protecting Vietnamese businesses from the human side of cybercrime.


Updated: 18/4/2026