Item Pipeline¶
After an item has been scraped by a spider, it is sent to the Item Pipeline, which processes it through several components that are executed sequentially.
Item pipelines are usually implemented on a per-project basis. Typical uses of item pipelines are:
- HTML cleansing
- validation
- persistence (storing the scraped item)
Writing your own item pipeline¶
Writing your own item pipeline is easy. Each item pipeline component is a single Python class that must define the following method:
scrapy.contrib.pipeline.process_item(spider, item)¶
Parameters:
- spider (BaseSpider object) – the spider which scraped the item
- item (Item object) – the item scraped
This method is called for every item pipeline component and must either return an Item (or any descendant class) object or raise a DropItem exception. Dropped items are no longer processed by further pipeline components.
Item pipeline example¶
Let’s take a look at the following hypothetical pipeline, which adjusts the price attribute for those items that do not include VAT (price_excludes_vat attribute), and drops those items which don’t contain a price:
from scrapy.core.exceptions import DropItem

class PricePipeline(object):

    vat_factor = 1.15

    def process_item(self, spider, item):
        if item['price']:
            if item['price_excludes_vat']:
                item['price'] = item['price'] * self.vat_factor
            return item
        else:
            raise DropItem("Missing price in %s" % item)
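The pipeline's logic can be exercised outside a running Scrapy process with plain dicts standing in for items; in this sketch DropItem is stubbed with a local exception class (in a real project it comes from scrapy.core.exceptions):

```python
# Stand-in for scrapy.core.exceptions.DropItem, so this sketch runs
# without a Scrapy installation.
class DropItem(Exception):
    pass

class PricePipeline(object):

    vat_factor = 1.15

    def process_item(self, spider, item):
        if item['price']:
            if item['price_excludes_vat']:
                item['price'] = item['price'] * self.vat_factor
            return item
        else:
            raise DropItem("Missing price in %s" % item)

pipeline = PricePipeline()

# A VAT-exclusive price is scaled by the VAT factor.
item = pipeline.process_item(None, {'price': 100, 'price_excludes_vat': True})

# An item with no price raises DropItem, so later components never see it.
try:
    pipeline.process_item(None, {'price': 0, 'price_excludes_vat': False})
except DropItem as exc:
    print(exc)
```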
Activating an Item Pipeline component¶
To activate an Item Pipeline component you must add its class to the
ITEM_PIPELINES
list, like in the following example:
ITEM_PIPELINES = [
    'myproject.pipeline.PricePipeline',
]
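Because the components are executed sequentially, their order in the list matters when one pipeline depends on another's output. A hypothetical settings.py combining two components:

```python
# settings.py -- hypothetical example: components run in list order,
# so each pipeline receives the items returned by the one above it.
ITEM_PIPELINES = [
    'myproject.pipeline.PricePipeline',       # adjust prices first
    'myproject.pipeline.DuplicatesPipeline',  # then filter duplicates
]
```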
Item pipeline example with resources per spider¶
Sometimes you need to keep resources about the processed items grouped per spider, and release those resources when a spider finishes.
An example is a filter that looks for duplicate items and drops those items that were already processed. Let's say our items have a unique id, but our spider returns multiple items with the same id:
from scrapy.xlib.pydispatch import dispatcher
from scrapy.core import signals
from scrapy.core.exceptions import DropItem

class DuplicatesPipeline(object):

    def __init__(self):
        self.duplicates = {}
        dispatcher.connect(self.spider_opened, signals.spider_opened)
        dispatcher.connect(self.spider_closed, signals.spider_closed)

    def spider_opened(self, spider):
        self.duplicates[spider] = set()

    def spider_closed(self, spider):
        del self.duplicates[spider]

    def process_item(self, spider, item):
        if item['id'] in self.duplicates[spider]:
            raise DropItem("Duplicate item found: %s" % item)
        else:
            self.duplicates[spider].add(item['id'])
            return item
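The dispatcher wiring only fires inside a running Scrapy process, but the per-spider bookkeeping itself can be sketched with direct method calls standing in for the spider_opened and spider_closed signals, and a plain string standing in for the spider object:

```python
# Stand-in for scrapy.core.exceptions.DropItem.
class DropItem(Exception):
    pass

class DuplicatesPipeline(object):

    def __init__(self):
        self.duplicates = {}

    def spider_opened(self, spider):
        self.duplicates[spider] = set()

    def spider_closed(self, spider):
        del self.duplicates[spider]

    def process_item(self, spider, item):
        if item['id'] in self.duplicates[spider]:
            raise DropItem("Duplicate item found: %s" % item)
        else:
            self.duplicates[spider].add(item['id'])
            return item

pipeline = DuplicatesPipeline()
pipeline.spider_opened('example_spider')   # normally triggered by the spider_opened signal

pipeline.process_item('example_spider', {'id': 1})       # first id=1: accepted
try:
    pipeline.process_item('example_spider', {'id': 1})   # second id=1: dropped
except DropItem as exc:
    print(exc)

pipeline.spider_closed('example_spider')   # frees the per-spider set
```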
Built-in Item Pipelines reference¶
Here is a list of item pipelines bundled with Scrapy.
File Export Pipeline¶
class scrapy.contrib.pipeline.fileexport.FileExportPipeline¶
This pipeline exports all scraped items into a file, using different formats.
It is a simple but convenient wrapper for using Item Exporters as Item Pipelines. If you need more custom/advanced functionality, you can write your own pipeline or subclass the Item Exporters.
It supports the following settings:
- EXPORT_FORMAT (mandatory)
- EXPORT_FILE (mandatory)
- EXPORT_FIELDS
- EXPORT_EMPTY
- EXPORT_ENCODING
If any mandatory setting is not set, this pipeline will be automatically disabled.
File Export Pipeline examples¶
Here are some usage examples of the File Export Pipeline.
To export all scraped items into an XML file:
EXPORT_FORMAT = 'xml'
EXPORT_FILE = 'scraped_items.xml'
To export all scraped items into a CSV file (with all fields in headers line):
EXPORT_FORMAT = 'csv'
EXPORT_FILE = 'scraped_items.csv'
To export all scraped items into a CSV file (with specific fields in headers line):
EXPORT_FORMAT = 'csv_headers'
EXPORT_FILE = 'scraped_items_with_headers.csv'
EXPORT_FIELDS = ['name', 'price', 'description']
File Export Pipeline settings¶
EXPORT_FORMAT¶
The format to use for exporting. Here is a list of all available formats. Click on the respective Item Exporter to get more info.
- xml: uses an XmlItemExporter
- csv: uses a CsvItemExporter
- csv_headers: uses a CsvItemExporter with the column headers on the first line. This format requires you to specify the fields to export using the EXPORT_FIELDS setting.
- json: uses a JsonLinesItemExporter
- pickle: uses a PickleItemExporter
- pprint: uses a PprintItemExporter
This setting is mandatory in order to use the File Export Pipeline.
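To make the difference between the csv and csv_headers formats concrete, here is a stdlib-only sketch of the two output shapes (the Item Exporters produce this internally; the rows and field names below are hypothetical):

```python
import csv
import io

# Hypothetical scraped items, just to show the two output shapes.
rows = [{'name': 'Sofa', 'price': '150'}, {'name': 'Lamp', 'price': '20'}]
fields = ['name', 'price']

# 'csv': values only, no headers line.
plain = io.StringIO()
writer = csv.writer(plain)
for row in rows:
    writer.writerow([row[f] for f in fields])

# 'csv_headers': the same rows, with the column names on the first line
# (this is why EXPORT_FIELDS is required for this format).
headered = io.StringIO()
writer = csv.DictWriter(headered, fieldnames=fields)
writer.writeheader()
writer.writerows(rows)

print(plain.getvalue())
print(headered.getvalue())
```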
EXPORT_FILE¶
The name of the file where the items will be exported. This setting is mandatory in order to use the File Export Pipeline.
EXPORT_FIELDS¶
Default: None
The names of the item fields that will be exported. This will be used for the fields_to_export Item Exporter attribute. If None, all fields will be exported.
EXPORT_EMPTY¶
Default: False
Whether to export empty (non-populated) fields. This will be used for the export_empty_fields Item Exporter attribute.