
Human-like bots perform online fraud

Nikhil Taneja Managing Director-India, SAARC & Middle East
In 2019, login pages were prime targets for fraudsters across verticals. Attackers use bad bots to carry out two types of online fraud: (1) account takeover, to steal PII and payment card details, and (2) fake account creation, to validate stolen payment card details (carding attacks) or cash out stolen cards.
For online businesses and their customers, the growing threat of online fraud is a real concern. With stringent data and privacy regulations such as GDPR and CCPA, online fraud is not just a business issue but a legal challenge as well. For many organizations, a data breach can threaten their very existence, given the massive fines imposed under new data protection regulations.
The real challenge is not weak data and payment security on the organization's part. With every measure online merchants take to tighten security and thwart malicious activity, cybercriminals up their game and outwit them. Online businesses today face a tireless legion of bots that can bypass security defenses to commit fraud.
The bad bots that perform online fraud are highly sophisticated and can mimic human behavior. According to the Big Bad Bot Problem 2020 report, 62.7% of bad bots targeting login pages can mimic human behavior, which means they can take over user accounts or even create fake accounts to perform carding or cash-out attacks. Similarly, 57.5% of bad bots on checkout pages can simulate human behavior when performing carding attacks.
Online Fraud During the Coronavirus Pandemic
While the world struggles to find a cure for coronavirus, even healthcare organizations are under cyberattack. We observed a spike in bot activity against e-commerce, entertainment, and BFSI in March. Cybercriminals are targeting e-commerce and financial services institutions with account takeover attacks during this pandemic.
We recommend the following action plan to spot and prevent online fraud:
Constantly monitor traffic sources and restrict login attempts per session/user/IP address/device.
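The per-IP throttling above can be sketched as a simple sliding-window rate limiter. This is a minimal illustration, not a production design; the limits (5 attempts per 60 seconds) and the function name are assumptions chosen for the example.

```python
import time
from collections import defaultdict, deque

# Illustrative limits (assumptions, not recommended values).
MAX_ATTEMPTS = 5
WINDOW_SECONDS = 60

_attempts = defaultdict(deque)  # ip -> timestamps of recent attempts


def allow_login_attempt(ip, now=None):
    """Return True if this IP is still under the sliding-window limit."""
    now = time.time() if now is None else now
    window = _attempts[ip]
    # Drop attempts that have aged out of the window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_ATTEMPTS:
        return False  # throttle: too many recent attempts from this IP
    window.append(now)
    return True
```

The same pattern extends to per-session, per-user, or per-device keys by changing what is used as the dictionary key.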
Develop competencies to detect automated behavioral patterns of users, and deploy systems that can discern the intent of automated traffic distributed across multiple sessions and sources.
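One simple behavioral signal such systems can use is request timing: humans produce irregular gaps between requests, while naive bots fire at near-constant intervals. The heuristic and threshold below are illustrative assumptions for the sketch, not a tuned detector.

```python
from statistics import pstdev

# Assumed threshold: timing variation below this looks machine-like.
JITTER_THRESHOLD = 0.05  # seconds of standard deviation


def looks_automated(timestamps):
    """Flag a session whose inter-request timing is suspiciously regular."""
    if len(timestamps) < 3:
        return False  # not enough data points to judge
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return pstdev(gaps) < JITTER_THRESHOLD
```

A real system would combine many such signals (mouse movement, header anomalies, navigation order) rather than rely on any single heuristic.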
Building an accurate bot detection engine is a tightrope act. Try to eliminate false negatives and you end up with more false positives, and vice versa. A lack of historical labeled data is another major obstacle to an accurate detection system. The best approach for an organization building an ML-powered automated bot management solution is to create a closed-loop feedback system that dynamically improves the machine-learning models based on signals collected directly from end users.
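The closed-loop idea can be sketched with a score-based detector whose decision threshold adapts as ground-truth signals (for example, a solved CAPTCHA marking a visitor as human) come back from end users. The class, the update rule, and the constants are all illustrative assumptions, standing in for a real model-retraining pipeline.

```python
class FeedbackBotDetector:
    """Toy detector whose threshold adapts to end-user feedback signals."""

    def __init__(self, threshold=0.5, learning_rate=0.1):
        self.threshold = threshold
        self.learning_rate = learning_rate

    def is_bot(self, score):
        """Classify a visitor from an automation score in [0, 1]."""
        return score >= self.threshold

    def record_feedback(self, score, was_human):
        """Nudge the threshold when a ground-truth signal arrives."""
        predicted_bot = self.is_bot(score)
        if predicted_bot and was_human:
            # False positive: relax the threshold slightly.
            self.threshold = min(1.0, self.threshold + self.learning_rate)
        elif not predicted_bot and not was_human:
            # False negative: tighten the threshold slightly.
            self.threshold = max(0.0, self.threshold - self.learning_rate)
```

In practice the feedback signals would drive periodic retraining of the underlying ML models, not just a scalar threshold, but the loop structure is the same: predict, collect labels from end users, correct.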
Monitor and restrict social media logins. Ensure that users have unique passwords, and educate them about the risks of password re-use to prevent credential stuffing and credential cracking attempts.