While AI has reshaped the way we live, work, and connect, it has also given scammers powerful tools to deceive us. From cloned voices to convincing chatbots, the scams are alarmingly realistic. This guide covers everything you need to know to identify and defend against the most common AI scams on the Internet.
Contents
- Deepfake Voice and Video Scams
- AI-Powered Phishing Attacks
- AI-Driven Mass Fake Reviews and Ratings
- AI-Generated Fake News and Propaganda
- Mass AI-Powered “Pig Butchering” Scams
Deepfake Voice and Video Scams
Using Generative Adversarial Networks (GANs) and autoencoders, AI models can learn a person's voice patterns, facial movements, and mannerisms to create fake videos of that person. Although this has legitimate uses in media and entertainment, scammers use it to impersonate well-known figures.
Scammers can use fake videos of a popular figure to endorse products or services, defame the individual, or make urgent requests, as in the notorious Quantum AI scam, where deepfakes of Elon Musk were used to promote cryptocurrency and other investment platforms.
These scams aren't limited to celebrities; they can be personalized to target you. For example, what if a scammer uses your company CFO's voice to order a money transfer to an unfamiliar account over the phone? It would be hard to refuse if the voice and speaking style closely match your CFO's.
Detect deepfake voice and video scams
Identifying deepfakes is your best defense against them. Thankfully, deepfakes aren't perfect (at least for now), so you can still spot them if you look for the clues. Below are some reliable checks:
- Analyze the video and audio: deepfakes usually show unnatural facial movements, such as stiff expressions with motion concentrated around the eyes and mouth. Lip movements that don't sync with the audio are another clear giveaway.
- Use Deepware Scanner: Deepware Scanner can flag fake videos by analyzing each frame with multiple detection models. Upload the video or paste its URL, and it will tell you how likely the footage is to be fake. It isn't perfect, but it catches most AI-generated videos.
- Listen to your gut: in most cases, a deepfake attempt comes from an unexpected source. Even if it seems genuine, something will feel off. Why would your CFO suddenly call from an unknown number asking for a large transfer to an unfamiliar account, or why would a celebrity suddenly promote an unrelated investment? Trust that feeling, double-check online, or verify with the source directly.
AI-Powered Phishing Attacks
AI greatly enhances social engineering tactics, which in turn makes phishing attacks more effective. With AI, attacks like spear phishing can be automated at scale: AI can analyze social media profiles, writing styles, and other public data to launch highly targeted attacks.
AI can email the target while impersonating someone important to them. AI chatbots are also good at holding a conversation with context, which makes it easier to convince people to click a phishing link that downloads malware or steals their credentials.
Protect against AI phishing attacks
Although many of the tactics for detecting traditional phishing apply to AI attacks as well, you need to be more skeptical with AI-driven attempts. Below are some ways to avoid AI phishing attacks:
- Don't entertain unusual requests, especially requests for information or to click on links. Verify with the sender through a different communication channel, since their account might be hacked.
- Enable two-factor authentication on your accounts to keep them secure even if a password is stolen.
- Hover over a suspicious link to preview its address, and make sure the registered domain exactly matches the website you expect (see the sketch after this list).
- Limit the personal information you share online. AI attacks rely heavily on publicly available information to personalize the attack.
- Never share sensitive information with anyone, even if someone claiming to be a service representative asks for it. Passwords, security codes, and one-time authentication codes are meant to be known only by the account owner.
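If you'd rather check a link programmatically than eyeball it, a few lines of Python can compare the link's registered domain against the site you expect. This is a minimal sketch using only the standard library: the URLs, the paypal.com example, and the two-label domain heuristic are illustrative assumptions, and multi-part TLDs like .co.uk would need a proper public-suffix list.

```python
from urllib.parse import urlparse

def registered_domain(url: str) -> str:
    """Return the last two labels of the hostname (e.g. 'example.com').
    A rough heuristic; multi-part TLDs like .co.uk need a public-suffix list."""
    host = (urlparse(url).hostname or "").lower()
    return ".".join(host.split(".")[-2:])

def looks_legitimate(url: str, expected_domain: str) -> bool:
    """Flag links whose registered domain differs from the site you expect."""
    return registered_domain(url) == expected_domain.lower()

# Phishing links often hide the real domain behind a familiar-looking prefix.
print(looks_legitimate("https://www.paypal.com/signin", "paypal.com"))                 # True
print(looks_legitimate("https://paypal.com.account-verify.net/login", "paypal.com"))   # False
```

The same logic applies when you hover: everything before the last part of the hostname can be chosen freely by the scammer, so only the registered domain at the end tells you where the link really goes.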
AI-Driven Mass Fake Reviews and Ratings
Fake product reviews have always been a problem, but AI has made the situation drastically worse. Previously, you could at least differentiate fake reviews from real ones, but AI-generated, human-like reviews are hard to tell apart. The situation is so bad that the Federal Trade Commission has banned fake reviews and can seek civil penalties against violators.
AI can mass-generate convincing, human-like reviews that make a product look like the best in its category. Many websites openly sell bulk fake-review services to boost businesses or bury their competitors. These reviews can easily make a bad product look good and convince consumers to buy it.
Avoid fake reviews and ratings
While it's difficult to spot a fake AI review from the text alone, there are still red flags to look for. The following checks may not work on every platform, but combined they should help you avoid bad products:
- Look for a surge of reviews at a particular time, especially soon after the product was first listed. Fake reviews are usually bought in bulk, so these bursts stand out (see the sketch after this list).
- If many generic reviews only praise the product without describing any personal experience with it, they might be fake.
- If possible, check the reviewers' profiles to see whether they only hand out 1-star or 5-star ratings and write in a generic tone.
- Look for Purchased or Verified tags if the platform supports them. It's much harder to post AI reviews in bulk when each one requires buying the product.
- Tools like Fakespot can help detect fake AI reviews on many popular sites, such as Amazon, eBay, and Best Buy. They aren't perfect, but they can help confirm your suspicion.
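If you can copy the review dates off a product page, a short script makes a suspicious burst easy to spot. This is a minimal sketch with made-up dates, and the 2x-average threshold is an arbitrary illustration rather than an established cutoff.

```python
from collections import Counter
from datetime import date

# Hypothetical review dates copied from a product page.
review_dates = [
    date(2024, 5, 2), date(2024, 5, 2), date(2024, 5, 2), date(2024, 5, 2), date(2024, 5, 2),
    date(2024, 5, 3), date(2024, 6, 11), date(2024, 7, 8), date(2024, 7, 25),
]

per_day = Counter(review_dates)
average = len(review_dates) / len(per_day)

# Flag any day that received far more reviews than the product's daily average.
for day, count in sorted(per_day.items()):
    if count > 2 * average:  # arbitrary threshold, for illustration only
        print(f"{day}: {count} reviews (possible bought-in-bulk batch)")
```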
AI-Generated Fake News and Propaganda
Similar to human-like fake reviews, AI can mass-generate fake posts, comments, and news articles to manipulate public opinion. Large networks of social media bot accounts create posts and comments favoring specific products, services, political views, or investment opportunities. This can lead people to believe the opinion is genuine, on the assumption that so many sources couldn't all be biased.
It's common to see bot comments on Reddit, YouTube, or Facebook discussing a product or service in detail across 4-5 different accounts. They repeat the same conversation on related posts and threads to make people think the product or service is good.
Cryptocurrency is a common target for this kind of manipulation: bots talk up a specific coin in social media posts, comments, news articles, and even deepfake videos to drive up demand.
Detect fake news and propaganda
You'll have to do some research of your own to verify whether an opinion holds up. That means not taking information at face value and looking around for clues. Here's what you can do:
- Identify fake news: check the history of the news site and make sure it has a track record of reliable reporting. Search the topic to see whether any other reputable outlet is covering it. You can also use NewsGuard, which rates how trustworthy a source is.
- Identify fake comments: fake comments agree or disagree emphatically and repeat specific phrases, such as spelling out the full name of the product or service instead of using pronouns (a quick check for copy-paste comments is sketched after this list).
- Identify fake posts: posts with disproportionately high engagement (likes or shares) compared to comments may involve bots amplifying the content. You can also check the poster's profile to see whether it's a legitimate account with a reliable history.
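Bot networks often post near-identical comments from different accounts, and a rough similarity check can surface that. This is a minimal sketch using Python's standard library difflib; the usernames, comment text, and the 0.8 threshold are made-up examples.

```python
from difflib import SequenceMatcher
from itertools import combinations

# Hypothetical usernames and comments pulled from a thread.
comments = {
    "user_a91": "CryptoNova Exchange changed my life, highly recommend CryptoNova Exchange!",
    "user_b22": "CryptoNova Exchange changed my life, I highly recommend CryptoNova Exchange!!",
    "longtime_lurker": "Took me a week to figure out the fee structure, support was slow but it worked out.",
}

# Near-identical comments from different accounts are a strong bot signal.
for (user1, text1), (user2, text2) in combinations(comments.items(), 2):
    similarity = SequenceMatcher(None, text1.lower(), text2.lower()).ratio()
    if similarity > 0.8:  # arbitrary threshold, for illustration only
        print(f"{user1} and {user2} posted nearly identical comments ({similarity:.0%} similar)")
```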
Mass AI-Powered “Pig Butchering” Scams
A pig butchering scam involves gaining someone's trust through friendship or a romantic relationship and then persuading them to invest in fake financial schemes. Now factor in AI that can hold long conversations toward almost any goal, and you get pig butchering campaigns at mass scale.
This is common on dating apps and social media, where bot accounts with convincing fake profiles strike up conversations with potential targets. They chat long enough to gain the target's full trust and then bring up lucrative-sounding financial schemes, such as cryptocurrency or forex platforms.
Detect pig butchering scams
Knowing who you are investing with and what you are investing in should be a basic check for any investment, and it helps against AI pig butchering scams as well:
- If someone you don't know keeps pushing financing opportunities that seem too good to be true, it should raise an eyebrow.
- Try to meet the person in real life or have a live video chat to make sure they are real. You can also run a reverse image search on their profile photo to check that it isn't stolen (see the sketch after this list).
- Check the investment platform yourself. Look for reviews and certifications, and verify whether it's registered with a financial authority.
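Reverse image search itself happens through services like Google Images or TinEye, but if the search turns up a candidate source photo, you can confirm the match locally with a perceptual hash. This is a minimal sketch assuming the third-party Pillow and imagehash packages are installed; the file names and the distance cutoff are placeholder assumptions.

```python
from PIL import Image  # pip install pillow
import imagehash       # pip install imagehash

# Placeholder paths: the profile photo you saved and the photo you suspect it was copied from.
profile_hash = imagehash.phash(Image.open("profile_photo.jpg"))
source_hash = imagehash.phash(Image.open("suspected_source.jpg"))

# Subtracting perceptual hashes gives a Hamming distance; small values mean the
# images are essentially the same picture, even after resizing or light edits.
distance = profile_hash - source_hash
print(f"Hash distance: {distance}")
if distance <= 8:  # illustrative cutoff, not a hard rule
    print("The profile photo is almost certainly a copy of the source image.")
```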
Staying vigilant and informed is your best defense against evolving AI-driven scams. Many of them can be avoided simply by questioning anything that seems too good to be true. If you have fallen for one, here's what to do if you have been scammed.