We’re Gonna Need a Bigger Boat – The Rise of AI-Enhanced Phishing Attacks


By Curt Graham

While you are hard at work, an unexpected email arrives with an urgent request: “This is your boss. I lost my company credit card and I need you to send me your card information immediately.” By now, most of us cannot be tricked by easy-to-spot phishing attempts like this. However, with the rise of artificial intelligence (AI), attacks are becoming more convincing and increasingly difficult to detect.

Two tools used by cybercriminals have been greatly enhanced by AI technology: spear-phishing and deepfakes. “Spear-phishing” is a targeted attack aimed at tricking victims into providing confidential information. With the assistance of AI, it has never been easier for cybercriminals to gather information about their targets online and to use that information to deploy a convincing attack, such as an email from a “colleague” asking the recipient to provide sensitive information or to click on a malicious link. Given the nearly flawless messages that can be created using AI, the longstanding suggestion to “be on the lookout for spelling and grammatical errors” is becoming less helpful by the day. Additionally, spam filters may be less effective against these sophisticated attacks, creating more opportunities for someone to fall victim.

“Deepfakes” are another commonly used AI-powered tactic. A deepfake uses AI to manipulate audio or video recordings, tricking people into believing that someone said something they did not. For example, instead of an error-riddled email from your “boss,” you may receive a voicemail from someone who sounds like your boss asking for your log-in credentials.

Several pieces of well-known advice continue to apply when guarding against these threats. Unsolicited and unexpected emails and requests should be viewed with skepticism and handled with caution. Attachments and links contained in an email should never be accessed until the email is verified as authentic. Organizations must also stay current on new threats and continually educate their employees about the need for vigilance. AI tools may even become available to detect and guard against AI attacks (now that’s fighting fire with fire!).

A new era has begun for cybercriminals, who now have a sophisticated tool at their disposal. Although the advancement of AI technology has made cyberattacks harder to spot, a little education and awareness go a long way in avoiding the harm posed by this growing threat.

For more information, please contact Curt Graham or your local FMG attorney.