Meta, the social network giant with nearly 4 billion users, is introducing facial recognition technology in an effort to combat the growing issue of fake celebrity scam ads that have plagued its platforms. In an October 21 statement, the company revealed that early testing with a small group of celebrities has yielded encouraging results. Meta plans to expand the trials to a wider group of 50,000 celebrities and public figures over the coming weeks.
The new system works by comparing images from the advertisements with the official Facebook and Instagram profile pictures of the celebrities in question. If the system identifies a match and determines the ad to be a scam, Meta will block it. This is part of Meta’s larger effort to curb the rise of scams targeting users with impersonations of well-known public figures. Celebrities like Tesla CEO Elon Musk, American TV host Oprah Winfrey, and Australian billionaires Andrew Forrest and Gina Rinehart have previously been used as bait in fraudulent ads.
Meta acknowledges that these so-called “celeb-bait” scams are a serious issue, not only for the individuals impersonated but also for users of its platforms. Scammers often use these ads to trick people into providing personal information or handing over money. Meta emphasised that this type of scam violates its policies and is detrimental to its user base. As part of its efforts to enhance protection, Meta will soon begin sending in-app notifications to targeted celebrities, informing them that they have been enrolled in the new protection system. Celebrities will be able to opt out if they choose.
This development comes at a time when Meta must tread carefully. The company recently reached a $1.4 billion settlement with the state of Texas after being accused of using biometric data from its residents without proper consent. To address concerns about privacy, Meta has stated that it will immediately delete any facial data generated during the process of determining whether an ad is a scam.
In addition to addressing celebrity impersonation scams, Meta is looking to extend the use of its facial recognition technology to help users verify their identities and regain access to compromised accounts. While this technology holds potential for bolstering security, Meta’s previous run-ins with data privacy issues have made some wary of its approach.
Amid a surge in cryptocurrency scam ads on Facebook, Meta recently disputed claims from Australia’s consumer regulator that nearly 60% of crypto investment schemes on the platform in August were fraudulent. Many of these scams reportedly use AI-generated deepfakes, a newer and more sophisticated method for luring victims into investing in bogus cryptocurrency ventures.
Meta’s latest initiative demonstrates its intention to ramp up security measures and take action against increasingly advanced online scams. However, the company’s approach to facial recognition will likely be scrutinised, particularly in light of recent privacy concerns. As the technology continues to evolve, Meta must balance its commitment to protecting users from scams with the need to ensure ethical and transparent use of biometric data.