
Social engineering has entered a new era where old defenses are proving insufficient. AI-enhanced social engineering has evolved into sustained, multi-channel campaigns that exploit human psychology. Hackers are also increasingly focusing on long-term campaigns, building relationships over time through social media, phone calls, or online forums.
Organizations can no longer rely solely on traditional defenses. Instead, they must adopt forward-thinking strategies that outsmart such sophisticated campaigns.
Classic phishing still happens on a massive scale: in 2024, phishing ranked among the top three cybercrime complaints reported to the FBI.
However, hackers have expanded beyond email into other messaging channels, such as SMS and chat apps.
“Smishing” occurs when hackers send fraudulent text messages, while “vishing” is voice phishing over phone calls, increasingly powered by AI voice deepfakes. Vishing has become a popular vehicle for CEO impersonation, most often used in attempts to extort funds from an organization.
Smishing can take the form of fake delivery notices, MFA prompts, or bank alerts delivered to your phone. The United States Postal Service issued an alert informing users that scammers use fake delivery notices to extract PII (personally identifiable information) from them, such as home addresses or social security numbers.
Fake bank alerts typically lead users to a phishing website that captures credentials.
Repeated MFA push notifications lead to alert fatigue, prompting users to finally approve a prompt just to make the notifications stop. The famous 2022 Uber breach came down to exactly this kind of MFA fatigue.
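One common mitigation is to rate-limit push prompts so an attacker cannot bombard a user into approving one. The sketch below is a minimal illustration, not a reference to any specific MFA product; the class name, thresholds, and escalation behavior are all assumptions:

```python
from collections import deque
import time


class PushThrottle:
    """Hypothetical sketch: deny further MFA push prompts once a user
    has received too many within a short sliding window -- a common
    mitigation for MFA-fatigue ("push bombing") attacks."""

    def __init__(self, max_prompts=3, window_seconds=300):
        self.max_prompts = max_prompts
        self.window = window_seconds
        self.history = {}  # username -> deque of prompt timestamps

    def allow_prompt(self, username, now=None):
        now = time.time() if now is None else now
        q = self.history.setdefault(username, deque())
        # Drop timestamps that have aged out of the sliding window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_prompts:
            # Instead of prompting again, a real system would lock the
            # flow and escalate to the security team.
            return False
        q.append(now)
        return True
```

In this sketch, a fourth prompt inside the five-minute window is refused, which turns a push-bombing attempt into a visible security event instead of a worn-down approval.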
Attackers have slipped into collaboration tools like Slack, Teams, or Zoom, using stolen credentials or invitations. They can also compromise WhatsApp accounts to gain trust. A Hong Kong-based company lost $25 million when a hacker used deepfake AI on a video call to convince an employee to make a fraudulent transfer.
Generative AI has removed many of the old “tells” on which we used to rely. Before AI, phishing emails were typically rife with spelling and grammatical errors either because the hackers were not English-speaking or simply couldn’t spell. One dubious theory was that the use of misspellings was a way to get around spam filters. However, the rise of perfectly written phishing emails, thanks to AI, debunks that theory.
While AI writing has its own “tells,” its use has become so normalized in business that “it was written by AI” is no longer enough to mark an email as suspicious.
AI makes it trivial to carry out large-scale OSINT (open source intelligence) operations, gathering publicly available information about victims through their social media profiles or even through previous data breaches. LLMs can then be primed with the gathered intelligence to carry on a convincing conversation with a victim.
Voice cloning has also become a staple for hackers.
Some attackers no longer strike fast. Instead, they’re playing the long game by building relationships over weeks or even months.
Long-term manipulation has long been the norm in romance scams and pig butchering. However, the tactic also works for business-related hacks.
Iranian cyber threat actors are especially known for developing sophisticated, long-term social engineering campaigns when targeting victims. They construct false personas across various social media sites, build trust with their targets, and finally send a malicious link or file to try to compromise the victim’s device.
In one case, the Iranian threat actor developed a relationship over several months with an aerospace defense contractor, then delivered malware to the victim’s machine through a malicious Excel spreadsheet.
These threat actors are also known to reuse personas.
Another common tactic is for threat actors to pose as recruiters or suppliers. One hacker group known as Scattered Spider impersonates real employees and then convinces help desk staff to issue the hackers new credentials. The Scattered Spider members are known for being “patient, amiable, and armed with the right arsenal of information to impersonate their target.”
This shift from “quick hit” phishing to patient social engineering makes detection harder and defenses trickier.
Security training is still important, but “don’t click the link” is outdated advice. Employees can’t realistically be expected to spot a sophisticated deepfake Zoom call or a voice clone of their CEO. Attackers only need one mistake, and humans make mistakes.
Traditional security training increasingly falls short against modern threats, especially in the face of AI. New-style attacks exploit human trust in visual and auditory cues.
To survive this new onslaught, businesses must shift to a layered defense built on three pillars: technical controls, company processes, and security culture.
Technical controls remain vital and serve as the first line of defense against all cyberattacks, not only AI-driven ones. While MFA isn’t flawless, Microsoft found that more than 99.9% of compromised accounts didn’t use MFA. The lack of MFA leaves accounts open to password-spraying, rainbow-table, and brute-force attacks.
FIDO2 hardware keys and biometrics might be a pain, but they go a long way to reducing the chances of account compromise.
Managed detection and response (MDR) and mobile protection tools prevent the majority of automated attacks, while anomaly detection tools can flag suspicious behavior if an attack does get through.
Without these tools in place, an organization is a sitting duck. However, more is needed to prevent falling prey to the new era of social engineering.
Company processes and policies form the second layer of defense.
Hackers know that bypassing automated defenses is hard. Occasionally, a highly sophisticated new attack vector emerges, such as zero-click mobile malware or LLMs that generate malicious scripts on the target machine. But the easiest way to circumvent sophisticated defenses is to target humans. Hence, social engineering.
By implementing processes for sensitive and potentially risky actions, your company significantly reduces the likelihood that such attacks succeed.
For example, you might have a policy that requires multiple signatories for any large transfer of funds or for any unscheduled transfer. Out-of-band communications, where someone must confirm their identity using a different communication channel, would block all but the most sophisticated social engineering attempts.
Other processes might include mandatory delay periods for significant transfers or rigid workflows and approval steps.
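Controls like these can be enforced in payment or workflow software rather than left to memory. The following is a minimal sketch of such a policy check; the thresholds, function name, and parameters are hypothetical, and a real implementation would live inside your payment workflow system:

```python
from datetime import datetime, timedelta

# Hypothetical policy thresholds -- tune to your organization.
LARGE_TRANSFER = 10_000
REQUIRED_APPROVERS = 2
MANDATORY_DELAY = timedelta(hours=24)


def transfer_allowed(amount, approvers, requested_at, now,
                     out_of_band_confirmed):
    """Return True only when a transfer satisfies every control:
    multiple distinct signatories, out-of-band identity confirmation,
    and a cooling-off delay for large amounts."""
    if len(set(approvers)) < REQUIRED_APPROVERS:
        return False  # multiple distinct signatories required
    if not out_of_band_confirmed:
        return False  # requester must confirm on a second channel
    if amount >= LARGE_TRANSFER and now - requested_at < MANDATORY_DELAY:
        return False  # large transfers sit through a delay period
    return True
```

Even a simple gate like this defeats the classic “urgent wire from the CEO” scam: a deepfaked video call can pressure one employee, but it cannot produce a second signatory, a callback on a separate channel, and a 24-hour wait all at once.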
The final layer of defense is to develop a company culture and organizational mindset where security practices become embedded in daily behavior. Normalizing caution means making it standard for employees to question and double-check requests without repercussions, like pausing to verify a suspicious Zoom call.
Employees should feel safe and encouraged to put the brakes on “urgent financial transfers” that haven’t followed the usual channels for approval.
Encouraging open questioning reduces hesitation in flagging anomalies.
The real battleground has shifted from spotting technical red flags to deciding whom and what to trust. As attacks blur the line between authentic and fake, organizations must assume that manipulation attempts will land, and so build systems that can withstand them.
For businesses, especially mid-market ones, the challenge is to prepare for a world where “people hacking” looks indistinguishable from real business interactions.
Efficient, modern security approaches can help reduce the strain on employees and limit the damage when (not if) attackers succeed.
That’s where SolCyber’s philosophy of simplifying security stacks and focusing on a human-led approach provides companies with a practical way to defend against today’s evolving attacks.
To learn more about SolCyber’s multi-layered, human-first defense approach, reach out to us for a chat.
