Source: python-w3lib
Maintainer: Debian Python Team <team+python@tracker.debian.org>
Uploaders:
 Ignace Mouzannar <mouzannar@gmail.com>,
 Andrey Rakhmatullin <wrar@debian.org>
Section: python
Priority: optional
Build-Depends:
 debhelper-compat (= 13),
 dh-sequence-python3,
 dh-sequence-sphinxdoc <!nodoc>,
 pybuild-plugin-pyproject,
 python-pytest-doc <!nodoc>,
 python-scrapy-doc <!nodoc>,
 python3-all,
 python3-doc <!nodoc>,
 python3-pytest <!nocheck>,
 python3-setuptools,
 python3-sphinx <!nodoc>,
 python3-sphinx-notfound-page <!nodoc>,
 python3-sphinx-rtd-theme <!nodoc>,
 tox <!nodoc>,
Standards-Version: 4.7.2
Vcs-Browser: https://salsa.debian.org/python-team/packages/python-w3lib
Vcs-Git: https://salsa.debian.org/python-team/packages/python-w3lib.git
Homepage: https://github.com/scrapy/w3lib
Testsuite: autopkgtest-pkg-pybuild

Package: python3-w3lib
Architecture: all
Depends:
 ${misc:Depends},
 ${python3:Depends},
Description: Collection of web-related functions
 Python module with simple, reusable functions to work with URLs, HTML,
 forms, and HTTP that are not found in the Python standard library.
 .
 This module can be used, for example, to:
  - remove comments or tags from HTML snippets
  - extract the base URL from HTML snippets
  - translate entities in HTML strings
  - encode multipart/form-data
  - convert raw HTTP headers to dicts and vice versa
  - construct HTTP authentication headers
  - join URLs in an RFC-compliant way
  - sanitize URLs (as browsers do)
  - extract arguments from URLs
 .
 The code of w3lib was originally part of the Scrapy framework but was later
 stripped out of Scrapy, with the aim of making it more reusable and providing
 a useful library of web functions without depending on Scrapy.

Package: python-w3lib-doc
Section: doc
Architecture: all
Depends: ${misc:Depends},
         ${sphinxdoc:Depends},
Built-Using: ${sphinxdoc:Built-Using}
Suggests: python3-w3lib
Description: Collection of web-related functions (documentation package)
 Python module with simple, reusable functions to work with URLs, HTML,
 forms, and HTTP that are not found in the Python standard library.
 .
 This module can be used, for example, to:
  - remove comments or tags from HTML snippets
  - extract the base URL from HTML snippets
  - translate entities in HTML strings
  - encode multipart/form-data
  - convert raw HTTP headers to dicts and vice versa
  - construct HTTP authentication headers
  - join URLs in an RFC-compliant way
  - sanitize URLs (as browsers do)
  - extract arguments from URLs
 .
 The code of w3lib was originally part of the Scrapy framework but was later
 stripped out of Scrapy, with the aim of making it more reusable and providing
 a useful library of web functions without depending on Scrapy.
 .
 This package contains documentation for python3-w3lib.
