You see an ad on Instagram for a “flash sale” from an unknown e-commerce store. It features a “miracle” medical product or an “unbeatable” investment scheme. Your instinct tells you it seems “too good to be true.”
A recent leak of internal Meta documents, reported by Reuters, confirms your instinct is 100% right. And the reality is much more serious than you might think. This isn’t just about scammers getting away with fraud. It’s about a system designed to profit from them.
Meta uses the term “violating revenue” to describe the money it earns from ads that break its own rules or laws. Internal estimates show that this “violating revenue” accounted for 10.1% of Meta’s total revenue for 2024, which equals about $16 billion.
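Those two figures are internally consistent, and you can check them with the report's own numbers (this is illustrative arithmetic, not Meta data):

```python
# Back-of-the-envelope check using only the two figures from the leak.
violating_revenue_usd = 16e9   # ~$16 billion in "violating revenue"
violating_share = 0.101        # reported as 10.1% of total 2024 revenue

# The total revenue implied by those two numbers:
implied_total = violating_revenue_usd / violating_share
print(f"Implied total 2024 revenue: ${implied_total / 1e9:.0f}B")  # ≈ $158B
```

That implied total is in the right ballpark for Meta's actual reported annual revenue, which is why the 10.1% estimate translates to roughly $16 billion.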
The documents show that on an average day, Meta displays an estimated 15 billion “high-risk” scam advertisements to its users. One internal memo, seen by Reuters, calculated that just one category of these scam ads generates about $7 billion in annualised revenue.
The company isn’t just aware of this; they’ve budgeted for it.
The Mechanism: How Meta Profits from Fraud
It’s surprising, but Meta does not aim to ban all fraudulent ads. Internal documents show that its business model is designed to allow some fraud to continue. Here’s how it works:
Step 1: The 95% “Certainty” Threshold
When a new advertiser wants to place an ad, Meta’s automated tools check it. According to the reports, however, Meta automatically bans an ad only if its systems are at least 95% sure it is fraudulent. That means if the system is only 90%, 80%, or even 50% sure that an ad is a complete scam, the policy is to let it run. Think about that!
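The policy described above boils down to a single threshold check. Here is a hypothetical sketch of that logic; the `fraud_score` input stands in for the output of Meta's automated classifier, and the function name and structure are illustrative, not Meta's actual code:

```python
# Sketch of the 95%-certainty ban policy as described in the reporting.
BAN_THRESHOLD = 0.95

def vet_ad(fraud_score: float) -> str:
    """Return the policy outcome for an ad with a given fraud probability."""
    if fraud_score >= BAN_THRESHOLD:
        return "banned"          # only near-certain scams are blocked
    return "allowed to run"      # 50%, 80%, even 94% "likely scam" ads pass

print(vet_ad(0.96))  # banned
print(vet_ad(0.80))  # allowed to run
```

The striking design choice is the asymmetry: all of the uncertainty below 95% resolves in the advertiser's favour, not the user's.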
Step 2: The “Scam Tax”
For advertisers who look suspicious but fall short of that 95% certainty, Meta doesn’t just allow their ads. It charges them higher rates instead, through a mechanism called a “penalty bid.” In effect, Meta flags these likely scammers and charges them a premium to keep advertising. The company is knowingly taking a cut of the fraud.
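Combined with the ban threshold, the “penalty bid” turns suspicion into a pricing tier. The sketch below illustrates that structure; the suspicion cutoff, the surcharge multiplier, and all names here are assumptions for illustration, not figures from the documents:

```python
# Illustrative model of the "penalty bid": suspicious-but-not-banned
# advertisers pay a surcharge to keep running. Numbers are assumed.
BAN_THRESHOLD = 0.95
SUSPICION_THRESHOLD = 0.50   # assumed cutoff for "likely scammer"
PENALTY_MULTIPLIER = 1.5     # assumed surcharge factor

def price_ad(base_bid: float, fraud_score: float):
    """Return the price charged for an ad, or None if it is banned."""
    if fraud_score >= BAN_THRESHOLD:
        return None                           # banned: no ad, no revenue
    if fraud_score >= SUSPICION_THRESHOLD:
        return base_bid * PENALTY_MULTIPLIER  # "scam tax": suspect pays more
    return base_bid                           # ordinary advertiser rate

print(price_ad(1.00, 0.80))  # 1.5 -- Meta earns MORE from the likely scammer
```

Note the incentive this creates: a likely scam ad is literally worth more per impression to the platform than an ordinary one.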
The Algorithm: A Feedback Loop from Hell
This is where users get trapped. When you, your aunt, or your friend clicks on a scam ad out of curiosity, Meta’s ad algorithm doesn’t see a “vulnerable user.” It sees “engagement.” The algorithm only aims to show you more of what you engage with.
So, it starts to fill your Facebook and Instagram feeds with even more similar, high-risk, and fraudulent content. The system is designed to find out what you might fall for and show you more of it.
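A toy model makes the loop concrete. The optimizer below only sees clicks, not harm; the categories, weights, and sampling scheme are all illustrative assumptions, not Meta's actual ranking system:

```python
# Toy model of an engagement feedback loop: the next batch of ads is
# sampled in proportion to what the user has clicked before.
import random

def next_feed(clicks_by_category: dict, n_ads: int = 10) -> list:
    """Sample the next batch of ads weighted by past clicks."""
    categories = list(clicks_by_category)
    weights = list(clicks_by_category.values())
    return random.choices(categories, weights=weights, k=n_ads)

# One curiosity click on a scam ad shifts the whole distribution:
clicks = {"clothing": 5, "fitness": 4, "miracle-cure scam": 1}
feed = next_feed(clicks, n_ads=1000)
print(feed.count("miracle-cure scam"))  # roughly 10% of the feed, from one click
```

And the loop compounds: every scam ad the user clicks in the new feed raises that category's weight for the next round.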
The most shocking part is that Meta’s own employees know this is happening. A May 2025 report from Meta’s own safety division (leaked to journalists) concluded that its platforms were involved in one-third of all successful scams in the United States. Another internal review was even more blunt: “It is easier to advertise scams on Meta than on Google.”
This is a full-blown crisis, and regulators are starting to see it. The U.S. Securities and Exchange Commission (SEC) is reportedly investigating Meta’s role in facilitating financial scam ads. A UK regulator found that in 2023, 54% of all payment-related scam losses in the country were linked to Meta’s platforms—more than all other social platforms combined.
The (Weak) Defence
When Reuters presented these findings, a Meta spokesperson, Andy Stone, said the documents “present a selective view” and that the 10.1% estimate was “rough and overly-inclusive.” He insisted, “We aggressively fight fraud and scams because… we don’t want it either.”
But the leaked documents tell a different story. They show a company that, in early 2025, had a rule for its ad-vetting team: they were not allowed to take enforcement actions that might cost the company more than 0.15% of its total revenue.
There was, in effect, a budgetary cap on how much fraud-fighting was allowed.
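Put a dollar figure on that cap using only numbers from the leak (illustrative arithmetic, not Meta data):

```python
# What a 0.15%-of-revenue enforcement cap means in dollars.
violating_revenue = 16e9                  # annual "violating revenue"
total_revenue = 16e9 / 0.101              # implied by the 10.1% figure
enforcement_cap = 0.0015 * total_revenue  # max revenue enforcement may cost

print(f"Enforcement cap: ${enforcement_cap / 1e9:.2f}B per year")  # ≈ $0.24B
print(f"Violating revenue is {violating_revenue / enforcement_cap:.0f}x larger")
```

On the report's own numbers, the fraud being tolerated was worth dozens of times more than the maximum the company allowed itself to spend (in forgone revenue) fighting it.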
Meta may have removed 134 million “scam ad posts” in 2025, as it claims. But the $16 billion in “violating revenue” suggests that for every ad taken down, a wave of others is allowed to run, all paying their “scam tax” on the way in.
