In a world increasingly shaped by digital interactions, the dangers of online exploitation, particularly sextortion, loom large over young adults and teens. Sextortion is a form of cybercrime where scammers threaten to expose someone’s intimate images unless they receive payment or further compromising content. These criminals target young, vulnerable users, exploiting the anonymity of the internet. In response to this growing threat, Meta has unveiled new safety features across its platforms, aiming to disrupt sextortion efforts and protect its users from such harmful schemes.
Meta’s latest measures build on existing protections while introducing new tools to stay ahead of these scammers. From blocking risky follow requests to preventing screenshots of disappearing content, the company is fortifying its platforms to safeguard teens and curb sextortion-related crime.
Reinforced Digital Walls: Blocking Scammy Accounts
A major part of Meta’s new strategy involves making it harder for suspicious accounts to engage with teens. The company has implemented stricter controls on follow requests, especially from accounts that display scam-like behavior. Accounts showing warning signs, such as a recently created profile or a pattern of suspicious activity, will either be blocked from sending follow requests to teens altogether or have those requests diverted to spam.
In an effort to cut off key avenues for exploitation, Meta has also restricted these high-risk accounts from viewing other users’ followers and following lists. Sextortion criminals typically mine these lists to identify a victim’s contacts and mutual friends, then use the threat of exposure to those people as leverage. By limiting access, Meta is making it harder for criminals to execute these schemes.
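To make the idea concrete, here is a minimal, purely illustrative sketch of how rule-based gating of follow requests might look. The signal names, thresholds, and the decide_follow_request helper are all hypothetical; Meta has not published its actual criteria.

```python
from dataclasses import dataclass

# Hypothetical risk signals for an account sending a follow request.
# Names and thresholds are illustrative, not Meta's actual criteria.
@dataclass
class AccountSignals:
    account_age_days: int       # how recently the profile was created
    spam_reports: int           # reports of scam-like behavior
    mass_follow_activity: bool  # rapid, indiscriminate follow requests

def decide_follow_request(sender: AccountSignals, recipient_is_teen: bool) -> str:
    """Return how a follow request is handled: 'deliver', 'spam', or 'block'."""
    if not recipient_is_teen:
        return "deliver"

    # Brand-new accounts that have already drawn reports are blocked outright.
    if sender.account_age_days < 7 and sender.spam_reports > 0:
        return "block"

    # Accounts with other suspicious patterns are diverted to the spam folder.
    if sender.mass_follow_activity or sender.spam_reports > 2:
        return "spam"

    return "deliver"

# Example: a days-old account with scam reports never reaches a teen's inbox.
print(decide_follow_request(AccountSignals(3, 2, True), recipient_is_teen=True))  # 'block'
```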
Say Goodbye to Screenshots: Protecting Ephemeral Content
To counter another common tactic used by sextortion criminals, capturing screenshots of intimate images, Meta is introducing a feature that blocks screenshots and screen recordings of temporary content. Instagram Direct and Messenger’s ‘view once’ and ‘allow replay’ options are often used to send private photos or videos; users can now share such content without worrying about unauthorized captures. Meta has also closed a web-based loophole by preventing ‘view once’ content from being opened on Instagram’s web interface.
These features underscore Meta’s proactive approach to protecting user privacy, particularly for those most at risk of sextortion.
Leveraging AI and Collaboration to Identify Threats
Another cornerstone of Meta’s new strategy is the expanded use of artificial intelligence to detect scam patterns. By studying recurring elements in scam profiles and behavior, Meta’s technology now automatically flags and removes sextortion-related accounts faster and more effectively. This proactive approach aims to prevent scams before they escalate, giving users added peace of mind.
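As a rough illustration of how pattern-based detection works in general, the sketch below trains a toy classifier on synthetic profile features. The feature names, data, and model are invented for illustration and do not reflect Meta’s actual systems or signals.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic training data: each row is [account_age_days, messages_to_strangers_per_day,
# links_per_message, prior_reports]. Labels: 1 = known scam account, 0 = benign.
# All values are invented for illustration; they are not Meta's features or data.
X = np.array([
    [2,    40, 0.8, 3],   # new account mass-messaging strangers with links -> scam
    [5,    25, 0.6, 1],
    [900,   2, 0.0, 0],   # long-lived account, normal behavior -> benign
    [400,   1, 0.1, 0],
    [3,    60, 0.9, 5],
    [1200,  3, 0.0, 0],
])
y = np.array([1, 1, 0, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# Score a newly observed account; a high probability would queue it for review or removal.
candidate = np.array([[4, 35, 0.7, 2]])
scam_probability = model.predict_proba(candidate)[0, 1]
print(f"scam probability: {scam_probability:.2f}")
```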
The company is also sharing these learnings with industry peers through the Tech Coalition’s Lantern program, a collaborative initiative where tech companies exchange insights on combating online threats. This shared knowledge equips other platforms with tools to protect their users, amplifying the collective effort to shut down sextortion rings.
Global Rollout of Nudity Protection
After testing the feature in selected markets, Meta is now rolling out its nudity protection filter for Instagram Direct Messages (DMs) globally. The feature automatically blurs images flagged as containing nudity when they are shared in chats, helping teens avoid unwelcome exposure to explicit content. For users under 18, the filter is enabled by default, and Meta has partnered with safety experts such as Larry Magid of ConnectSafely to develop resources that help parents understand the tool and protect their children online.
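Conceptually, the recipient-side behavior resembles the sketch below: if a detection model flags an incoming image, it is displayed blurred until the user chooses to reveal it. The is_flagged_as_nudity stub, the default-on logic for minors, and the blur radius are placeholders; Meta’s on-device detection is not public.

```python
from PIL import Image, ImageFilter

def is_flagged_as_nudity(image: Image.Image) -> bool:
    """Placeholder for an on-device nudity classifier; always a stub here."""
    return True  # assume the detector flagged this image

def prepare_for_display(path: str, recipient_is_minor: bool, filter_enabled: bool) -> Image.Image:
    """Blur flagged images before display; minors have the filter on by default."""
    image = Image.open(path)
    if (recipient_is_minor or filter_enabled) and is_flagged_as_nudity(image):
        # Heavy Gaussian blur so the content is unrecognizable until the user taps to reveal it.
        return image.filter(ImageFilter.GaussianBlur(radius=40))
    return image
```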
This rollout complements Meta’s broader Teen Accounts initiative, which enforces stricter messaging settings for underage users. Teen Accounts limit messaging to accounts the teen has personally followed, creating additional barriers against unwanted contact from strangers or potential scammers.
Cracking Down on Yahoo Boys
Sextortion schemes often intersect with organized criminal activity. Recently, Meta targeted the Yahoo Boys, a group notorious for fraud and exploitation. In just a week, the company removed over 1,600 Facebook groups and accounts linked to this group. This crackdown follows a similar operation in July, where around 7,200 assets tied to Yahoo Boys were purged. The group’s activities fall under Meta’s Dangerous Organizations and Individuals policy, one of the company’s strictest enforcement policies.
By refining its processes for identifying and removing these accounts, Meta aims to increase the speed and efficiency of its actions against such organized crime.
The Road Ahead: Ongoing Commitment to Safety
Meta’s multi-pronged approach to combating sextortion demonstrates its ongoing commitment to user safety, especially for vulnerable teens. From technological innovations to collaboration with industry peers, the company is evolving its strategies to stay ahead of criminals exploiting the digital space.
While no system is foolproof, these new features represent a significant leap forward in the fight against sextortion. As the digital landscape continues to shift, Meta is positioning itself at the forefront of efforts to protect young users from these predatory schemes, ensuring that the internet remains a safer space for all.