scrapy shell http://www.zara.com/us 

works in the Scrapy shell and returns the correct 200 code, but the actual script returns a 404 error:

2017-01-05 18:34:20 [scrapy.utils.log] INFO: Scrapy 1.3.0 started (bot: zara) 
2017-01-05 18:34:20 [scrapy.utils.log] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'zara.spiders', 'ROBOTSTXT_OBEY': True, 'DUPEFILTER_CLASS': 'scrapy.dupefilters.BaseDupeFilter', 'SPIDER_MODULES': ['zara.spiders'], 'HTTPCACHE_ENABLED': True, 'BOT_NAME': 'zara', 'LOGSTATS_INTERVAL': 0, 'USER_AGENT': 'zara (+http://www.yourdomain.com)'} 
2017-01-05 18:34:20 [scrapy.middleware] INFO: Enabled extensions: 
['scrapy.extensions.telnet.TelnetConsole', 
'scrapy.extensions.corestats.CoreStats'] 
2017-01-05 18:34:20 [scrapy.middleware] INFO: Enabled downloader middlewares: 
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware', 
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware', 
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware', 
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware', 
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware', 
'scrapy.downloadermiddlewares.retry.RetryMiddleware', 
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware', 
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware', 
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware', 
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware', 
'scrapy.downloadermiddlewares.stats.DownloaderStats', 
'scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware'] 
2017-01-05 18:34:20 [scrapy.middleware] INFO: Enabled spider middlewares: 
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware', 
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware', 
'scrapy.spidermiddlewares.referer.RefererMiddleware', 
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware', 
'scrapy.spidermiddlewares.depth.DepthMiddleware'] 
2017-01-05 18:34:20 [scrapy.middleware] INFO: Enabled item pipelines: 
[] 
2017-01-05 18:34:20 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023 
2017-01-05 18:34:20 [scrapy.core.engine] INFO: Spider opened 
2017-01-05 18:34:20 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.zara.com/robots.txt> (referer: None) ['cached'] 
2017-01-05 18:34:20 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (301) to <GET http://www.zara.com/us/> from <GET http://www.zara.com/us> 
2017-01-05 18:34:20 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.zara.com/us/> (referer: None) ['cached'] 

However, when I request www.zara.com/us from my actual .py script, it returns a 404 error. If I use www.zara.com, the page returns 200, but when I run the spider against the country-specific site it gives a 404 in the terminal...

import scrapy

from zara.items import ProductItem  # project module is 'zara' per the settings above


class ZaraSpider(scrapy.Spider):

    name = "zara-us"
    # the attribute is "allowed_domains" (plural) and must contain bare
    # domains, not paths
    allowed_domains = ['www.zara.com']
    start_urls = [
        "http://www.zara.com/us"
    ]
    handle_httpstatus_list = [404]

    # navigating main page
    def parse(self, response):

        # get the first category listing in the navigation sidebar
        categories = response.xpath('//*[@id="menu"]/ul/li')
        collections = categories[0].xpath('a//text()').extract()
        yield ProductItem(collection=collections[0])
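The spider also relies on a ProductItem class that isn't shown. A minimal sketch of the project's items.py, assuming only the collection field used by the yield above, would be:

import scrapy

class ProductItem(scrapy.Item):
    # only the field the spider above populates; add more fields as needed
    collection = scrapy.Field()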

Typing scrapy crawl zara-us returns:

2017-01-05 18:45:24 [scrapy.utils.log] INFO: Scrapy 1.3.0 started (bot: zara) 
2017-01-05 18:45:24 [scrapy.utils.log] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'zara.spiders', 'ROBOTSTXT_OBEY': True, 'DUPEFILTER_CLASS': 'scrapy.dupefilters.BaseDupeFilter', 'SPIDER_MODULES': ['zara.spiders'], 'HTTPCACHE_ENABLED': True, 'BOT_NAME': 'zara', 'USER_AGENT': 'zara (+http://www.yourdomain.com)'} 
2017-01-05 18:45:24 [scrapy.middleware] INFO: Enabled extensions: 
['scrapy.extensions.logstats.LogStats', 
'scrapy.extensions.telnet.TelnetConsole', 
'scrapy.extensions.corestats.CoreStats'] 
2017-01-05 18:45:24 [scrapy.middleware] INFO: Enabled downloader middlewares: 
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware', 
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware', 
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware', 
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware', 
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware', 
'scrapy.downloadermiddlewares.retry.RetryMiddleware', 
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware', 
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware', 
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware', 
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware', 
'scrapy.downloadermiddlewares.stats.DownloaderStats', 
'scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware'] 
2017-01-05 18:45:24 [scrapy.middleware] INFO: Enabled spider middlewares: 
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware', 
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware', 
'scrapy.spidermiddlewares.referer.RefererMiddleware', 
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware', 
'scrapy.spidermiddlewares.depth.DepthMiddleware'] 
2017-01-05 18:45:24 [scrapy.middleware] INFO: Enabled item pipelines: 
[] 
2017-01-05 18:45:24 [scrapy.core.engine] INFO: Spider opened 
2017-01-05 18:45:24 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min) 
2017-01-05 18:45:24 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023 
2017-01-05 18:45:25 [scrapy.core.engine] DEBUG: Crawled (404) <GET http://www.zara.com/robots.txt> (referer: None) ['cached'] 
2017-01-05 18:45:25 [scrapy.core.engine] DEBUG: Crawled (404) <GET http://www.zara.com/us> (referer: None) ['cached'] 
2017-01-05 18:45:25 [scrapy.core.scraper] ERROR: Spider error processing <GET http://www.zara.com/us> (referer: None) 

Answer


By default, ROBOTSTXT_OBEY is set to True in every generated project. This means that before scraping a website, the spider downloads the site's robots.txt file and will not crawl pages that the file disallows.

To disable this, set ROBOTSTXT_OBEY to False in your settings.py file (or simply remove the line; the framework default is False).
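A minimal sketch of that change in the generated settings.py would be:

# settings.py
# Do not fetch or obey robots.txt before crawling
ROBOTSTXT_OBEY = False

After changing it, re-run scrapy crawl zara-us; the initial robots.txt request should no longer be issued.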

See more here
