IoT security – or the lack thereof – is making the news more and more often: computers and IoT devices are frequently hijacked by hackers and pressed into service as “bots” to perform distributed denial-of-service (DDoS) attacks, application exploits and credential stuffing.
Non-human, or bot, traffic currently accounts for more than 60% of all traffic to websites.
Those bots come in a variety of forms, so it is extremely important to distinguish between the infected hosts that make up botnets and perform various malicious activities, and the legitimate bots that drive customer traffic to your site (Googlebot, for example).
Different Types of Bot Attacks on Web Services
Websites that expose pricing and other proprietary information are especially attractive targets for bot traffic.
An example of content scraping: airlines use bot farms to scrape price information from competitors’ sites and feed it into dynamic pricing of similar products. Once they know what a competitor is charging, they can price their own services lower to gain a market advantage.
A more malicious use is a botnet that probes websites for vulnerabilities in their technology stack and catalogs each vulnerable site, ripe for later exploitation.
Bots are a Growing Crisis
In the past, bot attacks weren’t nearly as sophisticated and powerful as they are now. During the mid-1990s, for example, the typical attack consisted of 150 requests per second. At the time, this was enough to bring down numerous systems. Now, due to the sheer size of modern botnets, the average attack generates over 7,000 requests per second.
Last year, we all witnessed many large-scale attacks, such as the DDoS attack against Oracle Dyn (formerly Dyn DNS), which was hit with a flood of DNS queries from tens of millions of IP addresses at once. The attack was executed by the Mirai botnet, which infected over 100,000 IoT devices and targeted tech giants such as Netflix, Amazon, Spotify, Tumblr, Twitter, Reddit and OVH.
Because bot attacks are becoming more common (and dangerous), it’s crucial that every IT professional take proactive measures to combat malicious bot activities. Here are a few tips that can help in the fight against bots:
1. Separate the Bad Bots From the Good Bots
Bots are often lumped together into one big group, but there are good bots and there are bad bots. The bad ones are likely to attack your website and cause harm, but the good ones — like Googlebot — help make the internet a safer, more efficient place.
For that reason, you can’t simply block all bots in hopes of avoiding an attack. Instead, you need to categorize and allow good bots, whilst limiting and managing the bad ones.
Commonly, a CAPTCHA is used to address basic bot attacks. Because it requires “human” interaction to complete, it is seen as a good starting point. However, CAPTCHAs also inconvenience legitimate users and get in the way of the site experience.
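One practical way to allow a good bot like Googlebot without trusting a spoofable User-Agent string is the double DNS lookup that Google itself documents: reverse-resolve the client IP, check the hostname belongs to Google, then forward-resolve that hostname and confirm it maps back to the same IP. A minimal sketch (the function name and domain list here are illustrative):

```python
import socket

def verify_search_bot(ip, allowed_suffixes=(".googlebot.com", ".google.com")):
    """Check whether an IP claiming to be Googlebot really belongs to Google.

    Double-lookup method: reverse-resolve the IP to a hostname, confirm the
    hostname is in Google's domains, then forward-resolve the hostname and
    confirm it maps back to the original IP.
    """
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)   # reverse (PTR) lookup
    except socket.herror:
        return False                                # no PTR record: not a verified bot
    if not hostname.endswith(allowed_suffixes):
        return False                                # hostname outside Google's domains
    try:
        _, _, forward_ips = socket.gethostbyname_ex(hostname)  # forward lookup
    except socket.gaierror:
        return False
    return ip in forward_ips                        # must resolve back to the same IP
```

A request that fails this check but presents a Googlebot User-Agent is almost certainly a bad bot masquerading as a good one, and can be rate-limited or blocked with confidence.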
2. Take Advantage of the Latest Technology in Security
Traditional rate limiting and CAPTCHAs are no longer enough on their own, and many companies have introduced JavaScript challenges to establish the legitimacy of a request’s origin.
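For context, the rate limiting mentioned above is typically implemented as a per-client token bucket: each client earns request tokens at a steady rate up to a burst cap, and requests beyond that budget are throttled or challenged. A minimal sketch, with illustrative thresholds (not any particular vendor’s defaults):

```python
import time

class TokenBucket:
    """Minimal per-client token-bucket rate limiter."""

    def __init__(self, rate=10.0, capacity=20.0):
        self.rate = rate              # tokens replenished per second
        self.capacity = capacity      # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at the burst capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True               # request within budget
        return False                  # over the limit: throttle or challenge

buckets = {}                          # one bucket per client IP

def check_request(client_ip):
    bucket = buckets.setdefault(client_ip, TokenBucket())
    return bucket.allow()
```

The weakness that motivates the techniques below is visible in this sketch: a botnet spreading requests across thousands of IPs stays under every individual bucket’s limit.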
Behavioural analysis of incoming requests, combined with device fingerprinting, enables companies to distinguish infected hosts from legitimate users – transparently, without impacting the browsing experience.
Leaseweb’s distributed cloud security platform can cope with large volumes of traffic and connections, further protecting you from bot attacks.
3. Utilize Artificial Intelligence
It’s only a matter of time before attackers learn more sophisticated ways to collect data and replicate real user behavior more accurately. For this reason, numerous companies are employing machine learning models to detect patterns and anomalies.
Leaseweb uses models that inspect data at a rate no human could match, while continually developing more sophisticated models to combat ever-changing bot technologies.
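To make the anomaly-detection idea concrete, here is a toy statistical stand-in for the models described above (it is not Leaseweb’s implementation): flag clients whose request rate is a robust outlier using the modified z-score based on the median absolute deviation (MAD), which, unlike a plain mean/standard-deviation test, is not skewed by the very outliers it is hunting. The 3.5 cutoff is a conventional illustrative choice:

```python
import statistics

def flag_anomalies(request_rates, threshold=3.5):
    """Flag clients whose request rate is a statistical outlier.

    Toy sketch using the modified z-score (MAD-based), which is robust
    to the outliers it is trying to find. `threshold` is illustrative.
    """
    rates = list(request_rates.values())
    median = statistics.median(rates)
    mad = statistics.median(abs(r - median) for r in rates)
    if mad == 0:
        # Near-identical traffic: anything off the median is suspect.
        return [c for c, r in request_rates.items() if r != median]
    return [c for c, r in request_rates.items()
            if 0.6745 * abs(r - median) / mad > threshold]
```

Real deployments replace this single feature with many behavioural signals (inter-request timing, navigation paths, fingerprint entropy) and learned models, but the principle is the same: model normal traffic, then flag what deviates.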
In conclusion, our advice is that companies take proactive steps to stop malicious bots without compromising the availability of their web assets. Leveraging behavioral controls rather than static rules is far more effective as we work to control the rise of the bots.