How to parse a sitemap.xml file using Scrapy's XMLFeedSpider?
I am trying to parse sitemap.xml files using Scrapy. The sitemap files look like the following one, just with many more url nodes.
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:video="http://www.sitemaps.org/schemas/sitemap-video/1.1">
  <url>
    <loc>http://www.site.com/page.html</loc>
    <video:video>
      <video:thumbnail_loc>http://www.site.com/thumb.jpg</video:thumbnail_loc>
      <video:content_loc>http://www.example.com/video123.flv</video:content_loc>
      <video:player_loc allow_embed="yes" autoplay="ap=1">http://www.example.com/videoplayer.swf?video=123</video:player_loc>
      <video:title>here is the page title</video:title>
      <video:description>and an awesome description</video:description>
      <video:duration>302</video:duration>
      <video:publication_date>2011-02-24T02:03:43+02:00</video:publication_date>
      <video:tag>w00t</video:tag>
      <video:tag>awesome</video:tag>
      <video:tag>omgwtfbbq</video:tag>
      <video:tag>kthxby</video:tag>
    </video:video>
  </url>
</urlset>
I looked at the related Scrapy documentation and wrote the following snippet to see if I was doing it the right way (and it seems I'm not ^^):
class SitemapSpider(XMLFeedSpider):
    name = "sitemap"
    namespaces = [
        ('', 'http://www.sitemaps.org/schemas/sitemap/0.9'),
        ('video', 'http://www.sitemaps.org/schemas/sitemap-video/1.1'),
    ]
    start_urls = ["http://example.com/sitemap.xml"]
    itertag = 'url'

    def parse_node(self, response, node):
        print "Parsing: %s" % str(node)
But when I run the spider, I get this error:
File "/.../python2.7/site-packages/scrapy/utils/iterators.py", line 32, in xmliter
yield XmlXPathSelector(text=nodetext).select('//' + nodename)[0]
exceptions.IndexError: list index out of range
I think I'm not defining the "default" namespace (http://www.sitemaps.org/schemas/sitemap/0.9) properly, but I can't figure out how to do it.
What's the correct way to iterate over the url nodes and then extract the needed information from their children?
ANSWER:
Unfortunately, I wasn't able to use the XMLFeedSpider (which is supposed to be the way to parse XML with Scrapy), but thanks to simplebias' answer, I was able to figure out a way to achieve this "the old-school way". I came up with the following code (which works, this time!):
class SitemapSpider(BaseSpider):
    name = 'sitemap'
    namespaces = {
        'sitemap': 'http://www.sitemaps.org/schemas/sitemap/0.9',
        'video': 'http://www.sitemaps.org/schemas/sitemap-video/1.1',
    }

    def parse(self, response):
        xxs = XmlXPathSelector(response)
        for namespace, schema in self.namespaces.iteritems():
            xxs.register_namespace(namespace, schema)
        for urlnode in xxs.select('//sitemap:url'):
            extract_datas_here()
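To make that placeholder concrete, here is a minimal sketch of what extract_datas_here() could do inline, assuming you want the page URL, the video title and the tags from each url node, and assuming the registered prefixes are also available on the nested url selectors (the start URL and the logged field names are just illustrative):

from scrapy.spider import BaseSpider
from scrapy.selector import XmlXPathSelector


class SitemapSpider(BaseSpider):
    name = 'sitemap'
    start_urls = ['http://example.com/sitemap.xml']  # placeholder URL
    namespaces = {
        'sitemap': 'http://www.sitemaps.org/schemas/sitemap/0.9',
        'video': 'http://www.sitemaps.org/schemas/sitemap-video/1.1',
    }

    def parse(self, response):
        xxs = XmlXPathSelector(response)
        for namespace, schema in self.namespaces.iteritems():
            xxs.register_namespace(namespace, schema)
        for urlnode in xxs.select('//sitemap:url'):
            # Every namespaced element must be addressed with its registered
            # prefix, including the "default" sitemap namespace.
            loc = urlnode.select('sitemap:loc/text()').extract()
            title = urlnode.select('video:video/video:title/text()').extract()
            tags = urlnode.select('video:video/video:tag/text()').extract()
            self.log("loc=%s title=%s tags=%s" % (loc, title, tags))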
Scrapy uses lxml/libxml2 under the hood, eventually invoking the node.xpath() method to perform the selection. Any elements in your XPath expression that are namespaced must be prefixed, and you must pass a mapping to tell the selector which namespace each prefix resolves to.
Here is an example to illustrate how to map prefixes to namespaces when using the node.xpath()
method:
>>> import lxml.etree
>>> doc = '<root xmlns="chaos"><bar /></root>'
>>> tree = lxml.etree.fromstring(doc)
>>> tree.xpath('//bar')
[]
>>> tree.xpath('//x:bar', namespaces={'x': 'chaos'})
[<Element {chaos}bar at 7fa40f9c50a8>]
Without having used Scrapy's XMLFeedSpider class, I'm guessing your namespace map and itertag need to follow the same scheme:
class SitemapSpider(XMLFeedSpider):
    namespaces = [
        ('sm', 'http://www.sitemaps.org/schemas/sitemap/0.9'),
    ]
    itertag = 'sm:url'
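If that guess is right, a fuller (untested) sketch of the XMLFeedSpider route might look like the following; note that the default 'iternodes' iterator matches the literal tag name, so the XPath-aware 'xml' iterator is probably needed once the itertag carries a prefix (the start URL and logged fields are placeholders):

from scrapy.contrib.spiders import XMLFeedSpider


class SitemapSpider(XMLFeedSpider):
    name = 'sitemap'
    start_urls = ['http://example.com/sitemap.xml']  # placeholder URL
    namespaces = [
        ('sm', 'http://www.sitemaps.org/schemas/sitemap/0.9'),
        ('video', 'http://www.sitemaps.org/schemas/sitemap-video/1.1'),
    ]
    itertag = 'sm:url'
    # The default 'iternodes' iterator looks for the literal tag name,
    # so use the XPath-based 'xml' iterator for a prefixed itertag.
    iterator = 'xml'

    def parse_node(self, response, node):
        loc = node.select('sm:loc/text()').extract()
        title = node.select('video:video/video:title/text()').extract()
        self.log("loc=%s title=%s" % (loc, title))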
I found that knowing the difference between hxs and xxs was helpful. I had a hard time locating the xxs object; I was trying to use this:

x = XmlXPathSelector(response)

when these worked far better for what I needed:

hxs.select('//p/text()').extract()

or

xxs.select('//title/text()').extract()
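For context, hxs and xxs in that snippet are just the two selector flavours built from the same response, following the old Scrapy shell naming; a rough sketch:

from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector, XmlXPathSelector


class ExampleSpider(BaseSpider):
    name = 'example'
    start_urls = ['http://example.com/feed.xml']  # placeholder URL

    def parse(self, response):
        hxs = HtmlXPathSelector(response)  # parses the response as HTML
        xxs = XmlXPathSelector(response)   # parses the response as XML
        self.log("paragraphs: %s" % hxs.select('//p/text()').extract())
        self.log("titles: %s" % xxs.select('//title/text()').extract())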