
Meta turns to facial recognition tech to combat scams


Meta will send in-app notifications to public figures and celebrities, letting them know they are part of a new test and that they can choose to opt out.

David Agranovich, Meta's Director of Global Threat Disruption, explained that if Meta suspects an ad or account is a scam using a celebrity's likeness, it will use facial recognition technology (FRT) to compare the face in the ad against the celebrity's Facebook or Instagram profile picture. If there is a match and the ad is confirmed to be a scam, Meta will block it. He said this process is faster and more accurate than human review.
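Meta has not published the internals of this system, but the comparison step Agranovich describes maps onto a standard face-embedding workflow. The sketch below is only a minimal illustration of that idea, not Meta's implementation; the helper `embed_face` and the similarity threshold are hypothetical.

```python
import numpy as np

def embed_face(image_path: str) -> np.ndarray:
    """Hypothetical helper: turns an image into a face-embedding vector.
    Any off-the-shelf face-embedding model could stand in here."""
    raise NotImplementedError("placeholder for a real face-embedding model")

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two embedding vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def matches_public_figure(ad_image: str, profile_image: str, threshold: float = 0.8) -> bool:
    """Compare the face in a suspected scam ad to a public figure's profile picture."""
    return cosine_similarity(embed_face(ad_image), embed_face(profile_image)) >= threshold

def should_block_ad(ad_image: str, profile_image: str, confirmed_scam: bool) -> bool:
    """Block only when the face matches AND the ad is confirmed to be a scam."""
    return confirmed_scam and matches_public_figure(ad_image, profile_image)
```

Note that a face match alone does not trigger a block in the process described above: the ad also has to be confirmed as a scam.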

The second use of FRT is account recovery. Meta will use FRT along with video selfies to help users verify their identity more easily when trying to regain access to hacked accounts. Users sometimes lose access to their Facebook or Instagram accounts when they forget their password, lose their device, or are tricked by a scammer.

Currently, if an account is compromised, users must confirm their identity by uploading an official ID or a document bearing their name. Meta is now testing video selfies as an alternative: users upload a video selfie, which is compared to the profile pictures on their account, much like unlocking a phone with face recognition.

Agranovich emphasized that the video will not be visible to anyone on Facebook or Instagram, and Meta will delete any facial data right after the comparison, whether there's a match or not.
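Again, the exact mechanism is not public. The sketch below only illustrates the flow Agranovich describes, with the selfie compared against the account's profile pictures and the derived facial data discarded immediately afterwards, match or not. The helper names (`extract_selfie_frame`, `embed_face`) and the threshold are hypothetical.

```python
import numpy as np

def extract_selfie_frame(video_path: str) -> np.ndarray:
    """Hypothetical helper: pulls a representative frame from the video selfie."""
    raise NotImplementedError("placeholder for real video-frame extraction")

def embed_face(image) -> np.ndarray:
    """Hypothetical helper: face-embedding model, as in the earlier sketch."""
    raise NotImplementedError("placeholder for a real face-embedding model")

def verify_account_owner(selfie_video: str, profile_pictures: list, threshold: float = 0.8) -> bool:
    """Compare a video selfie to the account's profile pictures, then discard facial data."""
    selfie_vec = embed_face(extract_selfie_frame(selfie_video))
    try:
        for picture in profile_pictures:
            profile_vec = embed_face(picture)
            similarity = float(np.dot(selfie_vec, profile_vec) /
                               (np.linalg.norm(selfie_vec) * np.linalg.norm(profile_vec)))
            if similarity >= threshold:
                return True
        return False
    finally:
        # Per Agranovich, the facial data is deleted right after the comparison,
        # whether or not there was a match (illustrated here by dropping the embedding).
        del selfie_vec
```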

"This pilot will help us more effectively remove scam ads featuring celebrities and make it easier for people to recover hacked accounts," Agranovich said. He also noted that this is just one part of Meta's broader strategy to combat scams and cybersecurity threats across its platforms.

Scam Check Overview

1. Initial rollout for celebrities affected by scams
2. Public rollout in the coming weeks
3. In-app notifications sent to enrolled public figures
4. Opt-out option available
5. FRT compares faces in suspected scam ads/accounts against the celebrity's profile pictures
6. Ads/accounts blocked if a match is confirmed and the scam is verified
7. FRT will also assist in account recovery






