Requests and Responses

Scrapy uses Request and Response objects for crawling web sites.

Typically, Request objects are generated in the spiders and pass across the system until they reach the Downloader, which executes the request and returns a Response object which travels back to the spider that issued the request.

Both Request and Response classes have subclasses which add functionality not required in the base classes. These are described below in Request subclasses and Response subclasses.

Request objects

Creating requests that submit HTML forms

Use form2request to build request data from an HTML <form> element and convert it to a Request.

Install it with pip:

pip install form2request

Select the desired form with CSS or XPath, then build and convert request data:

from form2request import form2request


def parse(self, response):
    form = response.css("form#search")
    request_data = form2request(form, data={"q": "scrapy"})
    yield request_data.to_scrapy(callback=self.parse_results)

Use data to override field values. To drop a field from the resulting request, set its value to None.
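For example, to override one field and drop another in a single call (the field names are illustrative, reusing the form selector from the example above):

request_data = form2request(form, data={"q": "scrapy", "utm_source": None})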

By default, form2request simulates clicking the first submit button. To submit without clicking any button, pass click=False. To click a specific submit button, pass its element:

def parse(self, response):
    form = response.css("form#checkout")
    submit = form.css('button[name="pay"]')
    request_data = form2request(form, click=submit)
    yield request_data.to_scrapy(callback=self.parse_confirmation)  # illustrative callback name

Using form2request to simulate a user login

It is common for websites to provide pre-populated form fields through <input type="hidden"> elements, such as session-related data or authentication tokens (for login pages). Build the request from the form and override only the credentials:

import scrapy
from form2request import form2request


class LoginSpider(scrapy.Spider):
    name = "example.com"
    start_urls = ["http://www.example.com/users/login.php"]

    def parse(self, response):
        form = response.css("form")
        request_data = form2request(
            form,
            data={"username": "john", "password": "secret"},
        )
        yield request_data.to_scrapy(callback=self.after_login)

    def after_login(self, response): ...

Passing additional data to callback functions

The callback of a request is a function that will be called when the response of that request is downloaded. The callback function will be called with the downloaded Response object as its first argument.

Example:

def parse_page1(self, response):
    return scrapy.Request(
        "http://www.example.com/some_page.html", callback=self.parse_page2
    )


def parse_page2(self, response):
    # this would log http://www.example.com/some_page.html
    self.logger.info("Visited %s", response.url)

In some cases you may be interested in passing arguments to those callback functions so you can receive the arguments later, in the second callback. The following example shows how to achieve this by using the Request.cb_kwargs attribute:

def parse(self, response):
    request = scrapy.Request(
        "http://www.example.com/index.html",
        callback=self.parse_page2,
        cb_kwargs=dict(main_url=response.url),
    )
    request.cb_kwargs["foo"] = "bar"  # add more arguments for the callback
    yield request


def parse_page2(self, response, main_url, foo):
    yield dict(
        main_url=main_url,
        other_url=response.url,
        foo=foo,
    )

Caution

Request.cb_kwargs was introduced in version 1.7. Prior to that, using Request.meta was recommended for passing information around callbacks. After 1.7, Request.cb_kwargs became the preferred way for handling user information, leaving Request.meta for communication with components like middlewares and extensions.

Using errbacks to catch exceptions in request processing

The errback of a request is a function that will be called when an exception is raised while processing it.

It receives a Failure as its first parameter and can be used to track connection establishment timeouts, DNS errors, etc.

Here’s an example spider logging all errors and catching some specific errors if needed:

import scrapy

from scrapy.spidermiddlewares.httperror import HttpError
from twisted.internet.error import DNSLookupError
from twisted.internet.error import TimeoutError, TCPTimedOutError


class ErrbackSpider(scrapy.Spider):
    name = "errback_example"
    start_urls = [
        "http://www.httpbin.org/",  # HTTP 200 expected
        "http://www.httpbin.org/status/404",  # Not found error
        "http://www.httpbin.org/status/500",  # server issue
        "http://www.httpbin.org:12345/",  # non-responding host, timeout expected
        "https://example.invalid/",  # DNS error expected
    ]

    async def start(self):
        for u in self.start_urls:
            yield scrapy.Request(
                u,
                callback=self.parse_httpbin,
                errback=self.errback_httpbin,
                dont_filter=True,
            )

    def parse_httpbin(self, response):
        self.logger.info("Got successful response from {}".format(response.url))
        # do something useful here...

    def errback_httpbin(self, failure):
        # log all failures
        self.logger.error(repr(failure))

        # in case you want to do something special for some errors,
        # you may need the failure's type:

        if failure.check(HttpError):
            # these exceptions come from HttpError spider middleware
            # you can get the non-200 response
            response = failure.value.response
            self.logger.error("HttpError on %s", response.url)

        elif failure.check(DNSLookupError):
            # this is the original request
            request = failure.request
            self.logger.error("DNSLookupError on %s", request.url)

        elif failure.check(TimeoutError, TCPTimedOutError):
            request = failure.request
            self.logger.error("TimeoutError on %s", request.url)

Accessing additional data in errback functions

If processing a request fails, you may be interested in accessing the arguments that were intended for its callback, so that you can process them further in the errback. The following example shows how to achieve this by using Failure.request.cb_kwargs:

def parse(self, response):
    request = scrapy.Request(
        "http://www.example.com/index.html",
        callback=self.parse_page2,
        errback=self.errback_page2,
        cb_kwargs=dict(main_url=response.url),
    )
    yield request


def parse_page2(self, response, main_url):
    pass


def errback_page2(self, failure):
    yield dict(
        main_url=failure.request.cb_kwargs["main_url"],
    )

Request fingerprints

There are some aspects of scraping, such as filtering out duplicate requests (see DUPEFILTER_CLASS) or caching responses (see HTTPCACHE_POLICY), where you need the ability to generate a short, unique identifier from a Request object: a request fingerprint.

You often do not need to worry about request fingerprints; the default request fingerprinter works for most projects.

However, there is no universal way to generate a unique identifier from a request, because different situations require comparing requests differently. For example, sometimes you may need to compare URLs case-insensitively, include URL fragments, exclude certain URL query parameters, include some or all headers, etc.

To change how request fingerprints are built for your requests, use the REQUEST_FINGERPRINTER_CLASS setting.

REQUEST_FINGERPRINTER_CLASS

Default: scrapy.utils.request.RequestFingerprinter

A request fingerprinter class or its import path.

Writing your own request fingerprinter

A request fingerprinter is a component that must implement the following method:

fingerprint(self, request: scrapy.Request)

Return a bytes object that uniquely identifies request.

See also Request fingerprint restrictions.

The fingerprint() method of the default request fingerprinter, scrapy.utils.request.RequestFingerprinter, uses scrapy.utils.request.fingerprint() with its default parameters. For some common use cases, you can also use scrapy.utils.request.fingerprint() in your own fingerprint() method implementation.

For example, to take the value of a request header named X-ID into account:

# my_project/settings.py
REQUEST_FINGERPRINTER_CLASS = "my_project.utils.RequestFingerprinter"

# my_project/utils.py
from scrapy.utils.request import fingerprint


class RequestFingerprinter:
    def fingerprint(self, request):
        return fingerprint(request, include_headers=["X-ID"])

You can also write your own fingerprinting logic from scratch.

However, if you do not use scrapy.utils.request.fingerprint(), make sure you use WeakKeyDictionary to cache request fingerprints:

  • Caching saves CPU by ensuring that fingerprints are calculated only once per request, and not once per Scrapy component that needs the fingerprint of a request.

  • Using WeakKeyDictionary saves memory by ensuring that request objects do not stay in memory forever just because you have references to them in your cache dictionary.

For example, to take into account only the URL of a request, without any prior URL canonicalization or taking the request method or body into account:

from hashlib import sha1
from weakref import WeakKeyDictionary

from scrapy.utils.python import to_bytes


class RequestFingerprinter:
    cache = WeakKeyDictionary()

    def fingerprint(self, request):
        if request not in self.cache:
            fp = sha1()
            fp.update(to_bytes(request.url))
            self.cache[request] = fp.digest()
        return self.cache[request]

If you need to be able to override the request fingerprinting for arbitrary requests from your spider callbacks, you may implement a request fingerprinter that reads fingerprints from request.meta when available, and then falls back to scrapy.utils.request.fingerprint(). For example:

from scrapy.utils.request import fingerprint


class RequestFingerprinter:
    def fingerprint(self, request):
        if "fingerprint" in request.meta:
            return request.meta["fingerprint"]
        return fingerprint(request)
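A spider callback can then pin the fingerprint of a specific request through its meta, for example to treat URLs that differ only in a session parameter as the same page (the URL and fingerprint value below are illustrative):

yield scrapy.Request(
    "http://www.example.com/page?session=1234",
    meta={"fingerprint": b"example.com/page"},
)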

If you need to reproduce the same fingerprinting algorithm as Scrapy 2.6, use the following request fingerprinter:

from hashlib import sha1
from weakref import WeakKeyDictionary

from scrapy.utils.python import to_bytes
from w3lib.url import canonicalize_url


class RequestFingerprinter:
    cache = WeakKeyDictionary()

    def fingerprint(self, request):
        if request not in self.cache:
            fp = sha1()
            fp.update(to_bytes(request.method))
            fp.update(to_bytes(canonicalize_url(request.url)))
            fp.update(request.body or b"")
            self.cache[request] = fp.digest()
        return self.cache[request]

Request fingerprint restrictions

Scrapy components that use request fingerprints may impose additional restrictions on the format of the fingerprints that your request fingerprinter generates.

The following built-in Scrapy components have such restrictions:

  • scrapy.extensions.httpcache.FilesystemCacheStorage (default value of HTTPCACHE_STORAGE)

    Request fingerprints must be at least 1 byte long.

    Path and filename length limits of the file system of HTTPCACHE_DIR also apply. Inside HTTPCACHE_DIR, the following directory structure is created:

    • Spider.name

      • first byte of a request fingerprint as hexadecimal

        • fingerprint as hexadecimal

          • filenames up to 16 characters long

For example, if a request fingerprint is made of 20 bytes (default), HTTPCACHE_DIR is '/home/user/project/.scrapy/httpcache', and the name of your spider is 'my_spider', your file system must support a file path like:

    /home/user/project/.scrapy/httpcache/my_spider/01/0123456789abcdef0123456789abcdef01234567/response_headers
    
  • scrapy.extensions.httpcache.DbmCacheStorage

    The underlying DBM implementation must support keys as long as twice the number of bytes of a request fingerprint, plus 5. For example, if a request fingerprint is made of 20 bytes (default), 45-character-long keys must be supported.

Request.meta special keys

The Request.meta attribute can contain any arbitrary data, but there are some special keys recognized by Scrapy and its built-in extensions.

Those are:

bindaddress

The default local outgoing address for download-handler connections.

This meta value can be either:

  • a host address as a string (e.g. "127.0.0.2"), in which case the local port is chosen automatically, or

  • a (host, port) tuple (e.g. ("127.0.0.2", 50000)) to bind to both a specific local interface and a specific local port.

For example:

Request(
    "https://example.org",
    meta={"bindaddress": "127.0.0.2"},
)

Request(
    "https://example.org",
    meta={"bindaddress": ("127.0.0.2", 50000)},
)

If not set, built-in HTTP download handlers use the value of DOWNLOAD_BIND_ADDRESS as the default bind address. Set the bindaddress request meta key to override it for a specific request.

This meta key is not supported by HttpxDownloadHandler, although that download handler does support DOWNLOAD_BIND_ADDRESS.

download_timeout

The amount of time (in seconds) that the downloader will wait before timing out. See also: DOWNLOAD_TIMEOUT.
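For example, to give a single slow request more time than the project-wide default (the URL and value are illustrative):

Request(
    "https://example.org/slow-endpoint",
    meta={"download_timeout": 120},  # seconds
)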

download_latency

The amount of time spent fetching the response, measured from when the request was started, i.e. from when the HTTP message was sent over the network. This meta key only becomes available once the response has been downloaded. While most other meta keys are used to control Scrapy behavior, this one is meant to be read-only.
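For example, a callback could log it once the response arrives (a minimal sketch):

def parse(self, response):
    latency = response.meta["download_latency"]
    self.logger.info("Fetched %s in %.2f seconds", response.url, latency)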

download_fail_on_dataloss

Whether or not to fail on broken responses. See: DOWNLOAD_FAIL_ON_DATALOSS.
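For example, to accept a possibly truncated response for a single request instead of failing it (the URL is illustrative):

Request(
    "https://example.org/flaky-endpoint",
    meta={"download_fail_on_dataloss": False},
)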

max_retry_times

This meta key is used to set the maximum number of retries per request. When set, the max_retry_times meta key takes precedence over the RETRY_TIMES setting.
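For example, to allow up to five retries for a single request (the URL is illustrative):

Request(
    "https://example.org/unstable-endpoint",
    meta={"max_retry_times": 5},
)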

Stopping the download of a Response

Raising a StopDownload exception from a handler for the bytes_received or headers_received signals will stop the download of a given response. See the following example:

import scrapy


class StopSpider(scrapy.Spider):
    name = "stop"
    start_urls = ["https://docs.scrapy.org/en/latest/"]

    @classmethod
    def from_crawler(cls, crawler):
        spider = super().from_crawler(crawler)
        crawler.signals.connect(
            spider.on_bytes_received, signal=scrapy.signals.bytes_received
        )
        return spider

    def parse(self, response):
        # 'last_chars' shows that the full response was not downloaded
        yield {"len": len(response.text), "last_chars": response.text[-40:]}

    def on_bytes_received(self, data, request, spider):
        raise scrapy.exceptions.StopDownload(fail=False)

which produces the following output:

2020-05-19 17:26:12 [scrapy.core.engine] INFO: Spider opened
2020-05-19 17:26:12 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2020-05-19 17:26:13 [scrapy.core.downloader.handlers.http11] DEBUG: Download stopped for <GET https://docs.scrapy.org/en/latest/> from signal handler StopSpider.on_bytes_received
2020-05-19 17:26:13 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://docs.scrapy.org/en/latest/> (referer: None) ['download_stopped']
2020-05-19 17:26:13 [scrapy.core.scraper] DEBUG: Scraped from <200 https://docs.scrapy.org/en/latest/>
{'len': 279, 'last_chars': 'dth, initial-scale=1.0">\n  \n  <title>Scr'}
2020-05-19 17:26:13 [scrapy.core.engine] INFO: Closing spider (finished)

By default, resulting responses are handled by their corresponding errbacks. To call their callback instead, like in this example, pass fail=False to the StopDownload exception.

Request subclasses

Here is the list of built-in Request subclasses. You can also subclass Request to implement your own custom functionality.

JsonRequest

The JsonRequest class extends the base Request class with functionality for dealing with JSON requests.

class scrapy.http.JsonRequest(url[, ... data, dumps_kwargs])

The JsonRequest class adds two new keyword parameters to the __init__() method. The remaining arguments are the same as for the Request class and are not documented here.

Using JsonRequest will set the Content-Type header to application/json and the Accept header to application/json, text/javascript, */*; q=0.01.

Parameters:
  • data (object) – any JSON-serializable object that needs to be JSON-encoded and assigned to the body. If the body argument is provided, this parameter will be ignored. If the body argument is not provided and the data argument is provided, the method will be set to 'POST' automatically.

  • dumps_kwargs (dict) – parameters that will be passed to the underlying json.dumps() call, which is used to serialize the data into JSON format.

JsonRequest usage example

Sending a JSON POST request with a JSON payload:

data = {
    "name1": "value1",
    "name2": "value2",
}
yield JsonRequest(url="http://www.example.com/post/action", data=data)
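The dumps_kwargs parameter can be used to tweak serialization. For example, to produce a sorted, human-readable JSON body (both keyword arguments below are standard json.dumps() parameters):

yield JsonRequest(
    url="http://www.example.com/post/action",
    data=data,
    dumps_kwargs={"indent": 2, "sort_keys": True},
)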

Response objects

Response subclasses

Here is the list of available built-in Response subclasses. You can also subclass the Response class to implement your own functionality.

TextResponse objects

class scrapy.http.TextResponse(url[, encoding[, ...]])

TextResponse objects add encoding capabilities to the base Response class, which is meant to be used only for binary data, such as images, sounds or any media file.

TextResponse objects support a new __init__() method argument, in addition to the base Response arguments. The remaining functionality is the same as for the Response class and is not documented here.

Parameters:

encoding (str) – the encoding to use for this response. If you create a TextResponse object with a string as body, it will be converted to bytes encoded using this encoding. If encoding is None (default), the encoding will be looked up in the response headers and body instead.

TextResponse objects support the following attributes in addition to the standard Response ones:

text

Response body, as a string.

The same as response.body.decode(response.encoding), but the result is cached after the first call, so you can access response.text multiple times without extra overhead.

Note

str(response.body) is not a correct way to convert the response body into a string:

>>> str(b"body")
"b'body'"
encoding

A string with the encoding of this response. The encoding is resolved by trying the following mechanisms, in order:

  1. the encoding passed in the __init__() method encoding argument

  2. the encoding declared in the Content-Type HTTP header. If this encoding is not valid (i.e. unknown), it is ignored and the next resolution mechanism is tried.

  3. the encoding declared in the response body. The TextResponse class doesn’t provide any special functionality for this. However, the HtmlResponse and XmlResponse classes do.

  4. the encoding inferred by looking at the response body. This is the most fragile method but also the last one tried.

selector

A Selector instance using the response as target. The selector is lazily instantiated on first access.

TextResponse objects support the following methods in addition to the standard Response ones:

jmespath(query)

A shortcut to TextResponse.selector.jmespath(query):

response.jmespath('object.[*]')

xpath(query)

A shortcut to TextResponse.selector.xpath(query):

response.xpath('//p')

css(query)

A shortcut to TextResponse.selector.css(query):

response.css('p')

urljoin(url)

Constructs an absolute URL by combining the Response’s base URL with a possible relative URL. The base URL is extracted from the <base> tag, or defaults to Response.url if there is no such tag.
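For example, assuming a response from http://www.example.com/dir/index.html with no <base> tag:

response.urljoin("page2.html")  # 'http://www.example.com/dir/page2.html'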

HtmlResponse objects

class scrapy.http.HtmlResponse(url[, ...])

The HtmlResponse class is a subclass of TextResponse which adds encoding auto-discovering support by looking into the HTML meta http-equiv attribute. See TextResponse.encoding.

XmlResponse objects

class scrapy.http.XmlResponse(url[, ...])

The XmlResponse class is a subclass of TextResponse which adds encoding auto-discovering support by looking into the XML declaration line. See TextResponse.encoding.

JsonResponse objects

class scrapy.http.JsonResponse(url[, ...])

The JsonResponse class is a subclass of TextResponse that is used when the response has a JSON MIME type in its Content-Type header.