
Crawling

What is crawling, and what types of web crawlers exist?

Crawling is the work done by a spider, bot, robot or crawler (any of the indexing robots used by search engines) to locate, read and analyze the content present on a web page.

Linked to this "research" carried out by crawling spiders is the term crawl budget: the time a spider or bot spends analyzing a website in full. This time is controlled by the search engine's own applications, which decide which websites to crawl and how much time to devote to each page.
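To make the idea concrete, here is a minimal crawler sketch in Python, using only the standard library. The start URL, the breadth-first strategy and the ten-page budget are illustrative assumptions, not how any search engine's bot actually works:

```python
# A minimal, illustrative crawler: fetch a page, collect its links,
# and keep visiting new pages until a simple page budget runs out.
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class LinkParser(HTMLParser):
    """Collect the href targets of all <a> tags on a page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(start_url, page_budget=10):
    """Breadth-first crawl until the (illustrative) page budget is spent."""
    queue = [start_url]
    seen = set()
    while queue and len(seen) < page_budget:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except Exception:
            continue  # unreachable or non-HTML pages are simply skipped
        parser = LinkParser()
        parser.feed(html)
        # Resolve relative links and enqueue them for later visits.
        queue.extend(urljoin(url, link) for link in parser.links)
    return seen


if __name__ == "__main__":
    print(crawl("https://example.com"))
```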

1. Types of web crawlers

There are different types of web crawlers, distinguished mainly by the purpose for which they were conceived. In general, we can find the following types of web crawler:

– Search engine web crawlers. These are the best known and most widely used. Each search engine has its own web crawler to examine, collect and index the content of websites.

Some of the most popular ones are: GoogleBot (Google), Bingbot (Bing), Slurpbot (Yahoo), DuckDuckBot (DuckDuckGo), Baiduspider (Baidu), Yandex Bot (Yandex) and Alexa Crawler (Amazon).

– Commercial web crawlers. These are feature-rich, multi-purpose web crawlers created by software companies that sell them to other businesses.

– Desktop web crawlers. These are web crawlers that run on a PC or laptop. They are usually low cost and quite limited: they can typically crawl only small amounts of data and a small number of websites.

– Cloud web crawlers. These are web crawlers that store data in the cloud rather than on local servers, and they are usually offered as services by software companies. Their main advantage is that they are scalable.

– Custom web crawlers. These perform a single, very simple function and are used by companies for very specific tasks. An example would be crawlers that monitor a website or service for possible crashes, as sketched below.
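As an illustration of such a single-purpose crawler, the following Python sketch only checks whether a list of pages responds at all. The URLs and the reporting format are hypothetical examples:

```python
# A hypothetical "custom crawler": it does one thing only, namely
# report whether each page answers, which is the crash monitoring
# mentioned above.
from urllib.error import HTTPError, URLError
from urllib.request import urlopen


def check_uptime(urls):
    """Return a {url: status} map: an HTTP code or an error description."""
    results = {}
    for url in urls:
        try:
            with urlopen(url, timeout=5) as response:
                results[url] = response.status
        except HTTPError as err:
            results[url] = err.code                # server answered with an error
        except URLError as err:
            results[url] = f"down ({err.reason})"  # no answer at all
    return results


if __name__ == "__main__":
    print(check_uptime(["https://example.com", "https://example.org"]))
```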

2. Aspects of crawling: Google

Crawling is a vital part of web ranking and SEO, so it fulfills an important purpose in this area. Google is known to handle more than 200 variables that determine how search results are ranked.

How the different algorithms work is, logically, not public. What is known are some of the criteria taken into account to rank a web page, many of them obtained through crawling, including:

  • Domain age.
  • External links that refer to the domain.
  • How easy it is for the web crawler to monitor the domain (see the robots.txt check sketched after this list).
  • The page structure.
  • The territorial (country-code) extension of the domain.
  • The quality of the contents and updates.
  • The presence or absence of errors in the HTML code.
  • Whether it is optimized for mobile devices (smartphones and tablets).
  • The website loading speed.
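One of these criteria, crawl accessibility, is partly governed by the site's robots.txt file. The sketch below uses Python's standard urllib.robotparser to check whether a given crawler may fetch a given URL; the domain and the Googlebot user agent are illustrative assumptions:

```python
# Check a crawlability signal a bot can read automatically: robots.txt.
from urllib.robotparser import RobotFileParser

parser = RobotFileParser()
parser.set_url("https://example.com/robots.txt")
parser.read()  # fetch and parse the site's robots.txt

# can_fetch() reports whether a given crawler may request a given URL.
print(parser.can_fetch("Googlebot", "https://example.com/private/"))
```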

There are more conditions for SEO ranking than Google makes public, and with each update of its algorithm they change or lose relevance, so it is necessary to stay very attentive after each update of its ranking algorithm.