A new phishing campaign uses Facebook Messenger chatbots to impersonate the company’s support team and steal credentials used to manage Facebook pages.
In a new campaign discovered by Trustwave, threat actors use chatbots to steal credentials for managers of Facebook pages, commonly used by companies to provide support or promote their services.
The attack begins with an email informing the recipient that their Facebook page has violated Community Standards and giving them 48 hours to appeal the decision before the page is deleted. “This malicious email claims that the user’s page is about to be terminated due to a violation of Facebook’s community standards,” the report states.
The user is supposedly offered a chance to resolve the problem in Facebook’s Support center and is urged to click an “Appeal Now” button to access it. Clicking that button takes the victim to a Messenger conversation where a chatbot impersonates a Facebook customer support agent.
The Facebook page associated with the chatbot is a standard business page with zero followers and no posts. However, if a victim checked the profile, they would see a message stating that the profile is “Very responsive to messages,” indicating that it is actively used. According to the report, “The user must be logged into the platform to engage with the chatbot. If not, it prompts the user to log in to Facebook.”
The chatbot will send the victim an “Appeal Now” button on Messenger, which takes victims to a website disguised as a “Facebook Support Inbox,” but the URL is not part of Facebook’s domain.
As Trustwave also notes, the case number on that page doesn’t match the one the chatbot presented earlier, but such details are unlikely to expose the fraud to panicked users.
The main phishing page, shown below, requests users who want to appeal the page deletion decision to enter their email address, full name, page name, and phone number.
Once the data is entered and the “Submit” button is pressed, a pop-up appears requesting the account password. All of this information is then sent to the threat actor’s database via a POST request.
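The exfiltration step is ordinary form handling: the harvested fields are serialized into a POST body and sent to an attacker-controlled endpoint. A minimal sketch of what that serialization looks like (the field names and values here are hypothetical, for illustration only; the actual form parameters used by the campaign are not disclosed in the report):

```python
from urllib.parse import urlencode

# Hypothetical field names/values standing in for the data the phishing form collects.
harvested = {
    "email": "victim@example.com",
    "full_name": "Jane Doe",
    "page_name": "Example Bakery",
    "phone": "+15550100",
    "password": "hunter2",  # captured by the follow-up password pop-up
}

# A browser submitting the form encodes the fields into a standard
# application/x-www-form-urlencoded POST body like this one.
post_body = urlencode(harvested)
print(post_body)
```

Because the request goes to the attacker’s own server rather than to Facebook, every field the victim types is delivered directly to the threat actor.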
Finally, the victim is redirected to a fake 2FA page where they are urged to enter the OTP they received via SMS on the provided phone number. That page will accept anything, so it’s just there to create a false sense of legitimacy in the whole process.
After the verification, the victims land on an actual Facebook page containing intellectual property and copyright guidelines that are supposedly relevant to the user’s violation.
Because the phishing attack is automated, the actual exploitation of the stolen credentials may come at a later phase, so the threat actors need to create this false sense of legitimacy in the victims’ minds to delay any breach remediation actions.
Threat actors increasingly use chatbots in phishing attacks to automate credential theft and scale up their operations without spending considerable time or resources. Chatbots are programs that imitate live support agents and are commonly used to answer simple questions or triage customer support cases before handing them off to a human employee.
These scams are harder to detect because many sites utilise AI and chatbots on their support pages, making them seem normal to users opening support cases.
As always, the best line of defence against phishing attacks is to analyse the URL of any page requesting login credentials: if the domain does not match the legitimate site’s regular URL, do not enter any credentials there.
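The same domain check can be expressed programmatically. A minimal Python sketch, assuming facebook.com is the only legitimate domain (real-world checks would cover all of a site’s official domains); note that it correctly rejects lookalike hosts such as `facebook.com.support-inbox.example`:

```python
from urllib.parse import urlsplit

def is_legitimate_facebook_url(url: str) -> bool:
    """Return True only if the URL's host is facebook.com or a subdomain of it."""
    host = (urlsplit(url).hostname or "").lower()
    # A hostname must END with the legitimate domain; a phishing host that merely
    # STARTS with "facebook.com." (e.g. facebook.com.evil.example) fails this test.
    return host == "facebook.com" or host.endswith(".facebook.com")

print(is_legitimate_facebook_url("https://www.facebook.com/help"))                       # True
print(is_legitimate_facebook_url("https://facebook.com.support-inbox.example/appeal"))   # False
```

Checking the hostname rather than eyeballing the full URL defeats the common trick of padding a phishing address with the real brand name.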