Web scraping

What is web scraping?

Web scraping consists of automatically browsing a website while extracting the data found on it, so that the extracted data can later be analyzed and manipulated according to certain parameters.
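
As a rough illustration, here is a minimal scraping sketch in Python using the requests and BeautifulSoup libraries; the URL and the CSS selector are hypothetical placeholders, not taken from any real site:

```python
# Minimal web scraping sketch: fetch a page and extract data from it.
# Requires: pip install requests beautifulsoup4
import requests
from bs4 import BeautifulSoup

# Hypothetical URL, used only for illustration.
url = "https://example.com/products"
response = requests.get(url, timeout=10)
response.raise_for_status()

# Parse the HTML and pull out every product title.
# The selector is an assumption about the page's markup.
soup = BeautifulSoup(response.text, "html.parser")
titles = [tag.get_text(strip=True) for tag in soup.select("h2.product-title")]

for title in titles:
    print(title)
```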

The application or software created to scrape is called a bot, spider or crawler. Many websites try to protect themselves from these applications to safeguard their data. A good example is the CAPTCHA found on many subscription forms, which not only keeps fake email accounts out of our subscriber database but also prevents crawlers from accessing certain areas of a website.

What are the objectives of web scraping?

The information obtained is very valuable, so web scraping is carried out with very diverse objectives. These could be said to be endless, like the possibilities of data mining, but some of the most common are:

  • Creating email databases: perhaps one of the most obvious uses, in which harvested addresses are compiled into databases for spamming users.
  • Knowing your competitors: by scraping their websites you obtain data that is not visible at first glance and is very valuable for positioning your brand in the market.
  • Controlling and comparing online offers, to be aware at all times of the offers available on other websites.
  • Generating alerts to monitor aspects of a website that we are interested in keeping under control.
  • Locating broken links, in order to fix them and improve the SEO strategy.
  • Monitoring competitors' prices and spotting trends, which lets you work out other websites' pricing strategies and react if necessary (see the sketch after this list).
  • Keeping track of changes to a website, so we are aware of any modifications made to our own website or to others.
  • Tracking online reputation and presence, which makes it possible to know how search engines rank the entries of a specific blog.
  • Checking product lists: if you manage an e-commerce store, it is very useful to know how competitors' product catalogs are put together in order to improve your own store.
  • Collecting data from several websites and comparing it, to learn about the trends and techniques those websites use in various aspects of interest.
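
To illustrate the price-monitoring use case from the list above, here is a minimal sketch, assuming a hypothetical competitor page where the price sits in a span.price element; the URL, selector and prices are all made up for the example:

```python
# Sketch of competitor price monitoring: scrape a price, compare it to ours,
# and flag when a reaction may be needed.
# Requires: pip install requests beautifulsoup4
import requests
from bs4 import BeautifulSoup

OUR_PRICE = 49.99  # example value for our own product

def fetch_competitor_price(url: str) -> float:
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    # Assumes the price appears as e.g. <span class="price">45,00 €</span>.
    price_text = soup.select_one("span.price").get_text(strip=True)
    return float(price_text.replace("€", "").replace(",", ".").strip())

competitor_price = fetch_competitor_price("https://competitor.example/product/123")
if competitor_price < OUR_PRICE:
    print(f"Alert: competitor is cheaper ({competitor_price:.2f} vs {OUR_PRICE:.2f})")
```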

Is web scraping legal?

This question is very common and the answer is that sometimes it is legal and sometimes it is not.

In other words, scrapers must always take into account the website's intellectual property rights so that the activity is not considered illegal; scraping is legal as long as the data obtained is freely available to third parties on the website itself.

In many situations, website owners offer an API so that scraping is not necessary and the data can be obtained easily (see the sketch after the list below). Hardly anyone objects to the Google crawler accessing their website to index its contents and thereby rank the page in the SERPs. To scrape legally, these aspects should be taken into account:

  • The collected data cannot be used for illegal or harmful purposes.
  • The website's intellectual property rights and other legal rights should always be respected.
  • If the data is only available after user registration or under a usage contract, it may not be collected by scraping.
  • Website owners have the right to put technical barriers in place to prevent web scraping, and these should not be circumvented.
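
As for the API alternative mentioned above, here is a minimal sketch of requesting structured data from a hypothetical JSON endpoint instead of parsing HTML; the endpoint, parameters and response fields are all assumptions for illustration:

```python
# Sketch of using an official API instead of scraping HTML.
# The endpoint and response fields are hypothetical.
import requests

response = requests.get(
    "https://api.example.com/v1/products",
    params={"category": "shoes"},
    timeout=10,
)
response.raise_for_status()

# The API returns structured JSON, so no fragile HTML parsing is needed.
for product in response.json()["products"]:
    print(product["name"], product["price"])
```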

How can we protect pages against web scraping?

Even if you explicitly state on your website that you don't allow web scraping, there will always be people who try, so you need to implement a series of measures to protect yourself, such as:

  • Adapting the .htaccess file according to the patterns of the IPs that attempt web scraping, that is, blocking them.
  • Controlling incoming requests: identifying IPs and filtering them in the firewall is a very effective measure for preventing scraping of your website.
  • Detecting and blocking hotlinking, so that our server resources are not used in unauthorized situations.
  • Limiting requests by IP address, so that an attacker can't open large numbers of connections from the same IP (see the rate-limiting sketch after this list).
  • Modifying the structure of the HTML: since crawlers rely on parsing the HTML, changing it frequently makes it harder for an attacker to scrape your website.
  • Offering an API, so you can monitor and restrict the data that can be extracted from your site. This won't stop malicious web scraping entirely, but it greatly reduces how often your website faces it.
  • Using honeypots, i.e. links to fake content that a normal visitor will never see, to detect unwanted crawlers; those links should be disallowed in the robots.txt file so legitimate search engine bots don't follow them (see the honeypot sketch after this list).
  • Using anti-CSRF tokens (protection against Cross-Site Request Forgery) to prevent automated bots from making abusive requests.
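
As an illustration of limiting requests by IP address, here is a minimal rate-limiting sketch using Flask with an in-memory counter; in production this would more likely live in the web server or firewall, and the window and limit values below are arbitrary examples:

```python
# Minimal per-IP rate-limiting sketch with Flask and an in-memory counter.
# A production setup would usually rate-limit at the web server or firewall.
# Requires: pip install flask
import time
from collections import defaultdict
from flask import Flask, request, abort

app = Flask(__name__)

WINDOW_SECONDS = 60   # length of the counting window (example value)
MAX_REQUESTS = 100    # requests allowed per IP per window (example value)
hits = defaultdict(list)  # ip -> list of request timestamps

@app.before_request
def rate_limit():
    now = time.time()
    ip = request.remote_addr
    # Keep only the timestamps that fall inside the current window.
    hits[ip] = [t for t in hits[ip] if now - t < WINDOW_SECONDS]
    if len(hits[ip]) >= MAX_REQUESTS:
        abort(429)  # Too Many Requests
    hits[ip].append(now)

@app.route("/")
def index():
    return "ok"
```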
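
And here is a minimal sketch of the honeypot idea: a trap link that a human visitor never sees, whose handler records the crawler's IP. The trap path and page structure are hypothetical:

```python
# Honeypot sketch with Flask: a trap URL that no human visitor should reach.
# The trap link is hidden from humans (e.g. with display:none) and should be
# disallowed in robots.txt so legitimate search engine bots skip it.
from flask import Flask, request

app = Flask(__name__)
trapped_ips = set()  # IPs that followed the trap link

@app.route("/do-not-follow")  # hypothetical trap path
def honeypot():
    trapped_ips.add(request.remote_addr)
    return "", 204

@app.route("/")
def index():
    if request.remote_addr in trapped_ips:
        return "Forbidden", 403
    # The trap link is invisible to humans but present in the HTML for crawlers.
    return '<a href="/do-not-follow" style="display:none">.</a>Welcome!'
```

The matching robots.txt would include a Disallow line for the trap path (here /do-not-follow), so that well-behaved search engine bots never trigger it.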