What are bots?

Whenever you’re browsing web pages, there’s a high probability that one or more bots are browsing that same page at the same time as you. But have you ever wondered – what are bots? What’s their purpose? Are they harmless, or can they cause damage – and should you be worried?

Don’t worry – we know there are a lot of questions to answer. That’s why we’ve written this article: to shed some light on what bots are and how they work.


Whether you’re a website owner or someone who browses the internet a lot (and who doesn’t these days?), you should be interested in learning more about bots. So stick around and let us answer some of the most important questions.

So, what ARE bots?

First things first – let’s answer the question.

An Internet bot is a software application (sometimes simply referred to as a program) that performs automated tasks over the Internet.

Usually, those tasks could also be performed by humans, but bots are far more efficient and time-effective at them.

In most cases, bots are given pre-defined instructions on how to perform specific tasks. This significantly reduces the need for human intervention, which is only required when the instructions weren’t “clear” enough or when the bot encounters an unexpected obstacle while performing its task.

Bots usually perform repetitive, relatively simple tasks – for example, visiting a web page and performing a predefined action.

What may come as a surprise is that, by most estimates, bots generate roughly half of all Internet traffic.

Good and Bad Bots – What’s the Difference?

Bots themselves – as automated programs – aren’t inherently “good” or “bad”.

When talking about good and bad bots, what matters is a bot’s primary use case. That’s what decides whether we categorize it as good or bad.

Even then, there will always be a grey zone where a bot can’t clearly be labeled as one or the other.

Good vs Bad Bots

What are the good bots?

As we’ve already mentioned – bots can automate certain tasks and do them much more efficiently than humans. 

Some bots can even interact with humans. The purpose of good bots is to save resources and make life easier for both customers and business owners. 

Here are some of the most common types of bots that are generally considered good:

  • Chatbots – these are most frequently used as a replacement for real humans, usually customer support agents. Most chatbots either exchange text messages or communicate via text-to-speech. Some of the earliest chatbots were ELIZA and PARRY. The purpose of chatbots is to save resources while delivering almost the same (or even better) service as a real human would (see the short sketch after this list).
  • Social Media Bots – nowadays, numerous things are labeled as social media bots, but not all of them correctly. For example, an account with a fake persona that is run by a real human is not a bot account – it’s just a fake account. Social media bots can be used similarly to chatbots – to answer direct messages on social media and provide useful links to your customers. However, in the domain of politics, there are allegedly more sophisticated bots that can compose posts on their own.
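
To make the chatbot idea more concrete, here’s a minimal, ELIZA-style sketch in Python. It’s only a toy illustration of pattern-matched canned replies – the patterns and responses are made up for this example, and real customer-support chatbots rely on far more sophisticated dialogue systems or language models.

```python
import re

# A few hard-coded pattern -> reply rules, in the spirit of ELIZA.
# The topics and wording here are invented purely for illustration.
RULES = [
    (re.compile(r"\b(hi|hello|hey)\b", re.I), "Hello! How can I help you today?"),
    (re.compile(r"\b(order|delivery)\b", re.I), "Could you share your order number so I can look into it?"),
    (re.compile(r"\b(refund|return)\b", re.I), "Sorry to hear that. Refund requests are usually handled within a few days."),
    (re.compile(r"\b(bye|thanks)\b", re.I), "Thanks for chatting with us. Goodbye!"),
]

def reply(message: str) -> str:
    """Return the first canned reply whose pattern matches, or a fallback."""
    for pattern, response in RULES:
        if pattern.search(message):
            return response
    return "I'm not sure I understand. Could you rephrase that?"

if __name__ == "__main__":
    print("Support bot (type 'quit' to exit)")
    while True:
        text = input("> ")
        if text.strip().lower() == "quit":
            break
        print(reply(text))
```

Running it gives a simple text loop: you type a message, and the bot answers with the first matching canned reply (or a generic fallback) – no human intervention needed until the conversation goes beyond its rules.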

What are the bad bots?

As opposed to good bots, bad bots are created with the goal of causing some sort of damage to someone or something. Most often, that damage is economic.

However, other types of malicious intent include identity theft, crashing websites, exploiting security vulnerabilities, and more.

These are some of the most common types of bad bots today:

  • Bots for (D)DoS Attacks – the purpose of a DDoS attack is to make a certain service unavailable to its intended users. This is done by building a botnet – usually by infecting a large number of unsuspecting users’ devices with malware. That malware can then be used to send requests and perform other actions from the devices it is installed on. Common ways of infecting a device include suspicious emails and scam websites.
  • Spambots – these bots post or send unwanted content all over the internet, from social media platforms to e-mails and direct messages. Their aim is always malicious, but to varying degrees: they can either post overly promotional content or be used for phishing and scams. Different platforms deal with these bots in different ways – but it seems the problem is never completely solved.
  • Click Fraud Bots – these bots can be used for different purposes. The two most common use cases are:
    • Pay-per-click (PPC) advertising damage – by using a bot to click on competitors’ ads, a perpetrator can cause significant economic damage to the target. PPC advertising works (as the name suggests) by charging the advertiser for each click on their ad. Whenever a bot clicks on an ad, the advertiser is charged, but the click isn’t genuine and there’s little chance of recovering that cost.
    • Distorting vanity metrics – the number of likes, followers, and shares can easily be inflated by click bots to appear larger than it actually is. This use case is slightly less damaging than the first, but still worth keeping in mind: any numerically expressed social proof on the Internet can be faked.
  • Credential Stuffing Bots – the goal of these bots is to gain access to poorly protected accounts on various platforms. They take large lists of leaked or commonly used username-and-password combinations and automatically “stuff” them into log-in forms. In practice, this means that if your username is “username123” and your password is “password123”, it’s almost certain you’re not the only one with access to that account.

The grey zone – good and bad bots

  • Web Scrapers/Web Crawlers – in short, web scraping is the process of extracting data from websites. Almost anything can be extracted – metadata, text content, images, etc. Googlebot, for example, is a web crawler: it collects data from websites, which Google then indexes and ranks on its search results pages. However, there are also cases of unwanted web scraping. One of the most famous was the dispute between LinkedIn and hiQ Labs, where in 2019 a US court found that scraping publicly available data was legal (at least in the US) – though the practice’s reputation did take a hit. A minimal scraping example follows after this list.
  • Purchasing Bots – these bots are used to purchase items from an online store extremely quickly (in a matter of milliseconds). At first glance, they seem like a convenient thing to have. However, we’ve placed them in the grey zone because they are mostly used to buy up very limited items, leaving regular customers who try to make a purchase on their own with virtually no chance.
  • Vulnerability Scanning Bots – these bots scan websites for vulnerabilities and then report their findings. The distinction between a good and a bad vulnerability scanning bot comes down to who receives the report. If it’s the website owner, who can then fix those vulnerabilities and improve the site’s overall protection, we can call it a good bot. If the report goes to an extortionist, for example, then such a bot is clearly a bad one.
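
To illustrate what a simple (and polite) web scraper looks like, here’s a minimal sketch in Python using the widely used requests and beautifulsoup4 libraries. The URL and User-Agent string are placeholders for this example; a real crawler would additionally check robots.txt, throttle its requests, and handle errors more carefully.

```python
import requests
from bs4 import BeautifulSoup

# Placeholder target - only scrape pages you are allowed to access.
URL = "https://example.com"

def scrape_page(url: str) -> dict:
    """Fetch a page and extract its title, meta description, and links."""
    response = requests.get(
        url,
        headers={"User-Agent": "example-scraper/0.1"},  # identify the bot honestly
        timeout=10,
    )
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")

    description_tag = soup.find("meta", attrs={"name": "description"})
    return {
        "title": soup.title.string.strip() if soup.title and soup.title.string else None,
        "description": description_tag.get("content") if description_tag else None,
        "links": [a["href"] for a in soup.find_all("a", href=True)],
    }

if __name__ == "__main__":
    data = scrape_page(URL)
    print(data["title"])
    print(data["description"])
    print(f"{len(data['links'])} links found")
```

This is essentially what a crawler like Googlebot does at a much larger scale: fetch pages, pull out structured information, and follow the discovered links to the next pages.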

Conclusion

We’ve answered the question of what bots are and listed some of the most common bot types.

With all this in mind, it’s clear why it’s worth knowing whether – and how well – any individual website is protected from bad (or malicious) bots.

You can find that out with BotMeNot, a tool made specifically for evaluating how well a website is protected from bots (primarily scraping bots).