2016-05-08

When I run this spider, Scrapy reports that the page being scraped is 'http://192.168.59.103:8050/render.html' (the Splash render endpoint defined in the "meta" parameter of start_requests). That is of course the URL I want to pass the start_urls through, not the page I actually want to scrape. My guess is that the problem lies in how the URLs are passed from start_urls into start_requests for parsing, but I can't pinpoint it exactly. Scrapy seems to bypass start_urls.

Here is also my settings file, in advance.

Thanks.

# -*- coding: utf-8 -*-
# scrapy crawl ia_check -o IA_OUT.csv -t csv

import scrapy
from scrapy.http import Request
from scrapy.selector import Selector
from scrapy.spiders import CrawlSpider, Rule

from ia_check.items import Check_Item

from datetime import datetime
import ia_check

class CheckSpider(CrawlSpider):
    name = "ia_check"
    handle_httpstatus_list = [404, 429, 503]

    start_urls = [
        "http://rads.stackoverflow.com/amzn/click/B00PRH5UJW",
        "http://rads.stackoverflow.com/amzn/click/B00KFITEV8",
        "http://rads.stackoverflow.com/amzn/click/B00J0T73XO",
        "http://rads.stackoverflow.com/amzn/click/B00O65Z0RS",
        "http://rads.stackoverflow.com/amzn/click/B00N3DDDRI"
    ]

    def start_requests(self):
        for url in self.start_urls:
            yield scrapy.Request(url, self.parse, meta={
                'splash': {
                    'endpoint': 'render.html',
                    'args': {'wait': 1}
                }
            })

    def parse(self, response):
        ResultsDict = Check_Item()
        Select = Selector(response).xpath

        ResultsDict['title'] = Select(".//*[@class='h1']/text()|.//*[@id='btAsinTitle']/text()").extract()
        ResultsDict['application_url'] = response.url
        return ResultsDict
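For context, the behaviour described above can be reproduced with a small stdlib sketch (the query construction below is an illustration, not scrapy-splash internals): with the raw meta-based approach, the HTTP request Scrapy actually issues targets the render.html endpoint, carrying the real page only as a parameter, which is why response.url reports the endpoint rather than the page.

```python
from urllib.parse import urlencode

# Values mirroring the spider above.
splash_endpoint = "http://192.168.59.103:8050/render.html"
target = "http://rads.stackoverflow.com/amzn/click/B00PRH5UJW"

# The request Scrapy ends up sending is aimed at the Splash endpoint,
# with the real target URL tucked into the query string, so the
# response's URL is the endpoint, not the target page:
effective_url = f"{splash_endpoint}?{urlencode({'url': target, 'wait': 1})}"
print(effective_url)
```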

1 Answer

I suggest upgrading to the scrapy-splash plugin (what used to be called scrapyjs) rather than talking to the Splash endpoint directly: it has a handy scrapy_splash.SplashRequest utility that "fixes" the URL back to the original remote host.

Here is an example spider similar to yours. Run it and check the console log, in particular the URLs:

import scrapy
from scrapy_splash import SplashRequest


class CheckSpider(scrapy.Spider):
    name = "scrapy-splash-example"
    handle_httpstatus_list = [404, 429, 503]

    start_urls = [
        "http://rads.stackoverflow.com/amzn/click/B00PRH5UJW",
        "http://rads.stackoverflow.com/amzn/click/B00KFITEV8",
        "http://rads.stackoverflow.com/amzn/click/B00J0T73XO",
        "http://rads.stackoverflow.com/amzn/click/B00O65Z0RS",
        "http://rads.stackoverflow.com/amzn/click/B00N3DDDRI"
    ]

    def start_requests(self):
        for url in self.start_urls:
            yield SplashRequest(url,
                                callback=self.parse,
                                args={
                                    'wait': 1,
                                })

    def parse(self, response):
        self.logger.debug("Response: status=%d; url=%s" % (response.status, response.url))

settings.py

# -*- coding: utf-8 -*- 

# Scrapy settings for splashtst project 
# 
# For simplicity, this file contains only settings considered important or 
# commonly used. You can find more settings consulting the documentation: 
# 
#  http://doc.scrapy.org/en/latest/topics/settings.html 
#  http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html 
#  http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html 

BOT_NAME = 'splashtst' 

SPIDER_MODULES = ['splashtst.spiders'] 
NEWSPIDER_MODULE = 'splashtst.spiders' 

# Splash stuff 
SPLASH_URL = 'http://localhost:8050' 
DOWNLOADER_MIDDLEWARES = { 
    'scrapy_splash.SplashCookiesMiddleware': 723, 
    'scrapy_splash.SplashMiddleware': 725, 
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810, 
} 

SPIDER_MIDDLEWARES = { 
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100, 
} 
DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter' 
HTTPCACHE_STORAGE = 'scrapy_splash.SplashAwareFSCacheStorage' 
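As a side note on the middleware wiring (a sketch, not part of the original answer): Scrapy calls downloader middlewares' process_request in ascending priority order, so the numbers above ensure the two Splash middlewares run before HttpCompressionMiddleware. Sorting the dict makes that order explicit:

```python
# The same priorities as in the settings above.
DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}

# process_request is invoked in ascending priority order:
order = [name.rsplit('.', 1)[-1]
         for name, prio in sorted(DOWNLOADER_MIDDLEWARES.items(),
                                  key=lambda kv: kv[1])]
print(order)
# ['SplashCookiesMiddleware', 'SplashMiddleware', 'HttpCompressionMiddleware']
```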

$ scrapy crawl scrapy-splash-example 
2016-05-09 12:46:05 [scrapy] INFO: Scrapy 1.0.6 started (bot: splashtst) 
2016-05-09 12:46:05 [scrapy] INFO: Optional features available: ssl, http11 
2016-05-09 12:46:05 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'splashtst.spiders', 'SPIDER_MODULES': ['splashtst.spiders'], 'DUPEFILTER_CLASS': 'scrapy_splash.SplashAwareDupeFilter', 'HTTPCACHE_STORAGE': 'scrapy_splash.SplashAwareFSCacheStorage', 'BOT_NAME': 'splashtst'} 
2016-05-09 12:46:05 [scrapy] INFO: Enabled extensions: CloseSpider, TelnetConsole, LogStats, CoreStats, SpiderState 
2016-05-09 12:46:05 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, RedirectMiddleware, CookiesMiddleware, SplashCookiesMiddleware, SplashMiddleware, HttpCompressionMiddleware, ChunkedTransferMiddleware, DownloaderStats 
2016-05-09 12:46:05 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, SplashDeduplicateArgsMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware 
2016-05-09 12:46:05 [scrapy] INFO: Enabled item pipelines: 
2016-05-09 12:46:05 [scrapy] INFO: Spider opened 
2016-05-09 12:46:05 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min) 
2016-05-09 12:46:05 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023 
2016-05-09 12:46:07 [scrapy] DEBUG: Crawled (200) <GET http://rads.stackoverflow.com/amzn/click/B00O65Z0RS via http://localhost:8050/render.html> (referer: None) 
2016-05-09 12:46:07 [scrapy-splash-example] DEBUG: Response: status=200; url=http://rads.stackoverflow.com/amzn/click/B00O65Z0RS 
2016-05-09 12:46:12 [scrapy] DEBUG: Crawled (200) <GET http://rads.stackoverflow.com/amzn/click/B00KFITEV8 via http://localhost:8050/render.html> (referer: None) 
2016-05-09 12:46:12 [scrapy-splash-example] DEBUG: Response: status=200; url=http://rads.stackoverflow.com/amzn/click/B00KFITEV8 
2016-05-09 12:46:12 [scrapy] DEBUG: Crawled (200) <GET http://rads.stackoverflow.com/amzn/click/B00PRH5UJW via http://localhost:8050/render.html> (referer: None) 
2016-05-09 12:46:13 [scrapy-splash-example] DEBUG: Response: status=200; url=http://rads.stackoverflow.com/amzn/click/B00PRH5UJW 
2016-05-09 12:46:16 [scrapy] DEBUG: Crawled (200) <GET http://rads.stackoverflow.com/amzn/click/B00N3DDDRI via http://localhost:8050/render.html> (referer: None) 
2016-05-09 12:46:17 [scrapy-splash-example] DEBUG: Response: status=200; url=http://rads.stackoverflow.com/amzn/click/B00N3DDDRI 
2016-05-09 12:46:18 [scrapy] DEBUG: Crawled (200) <GET http://rads.stackoverflow.com/amzn/click/B00J0T73XO via http://localhost:8050/render.html> (referer: None) 
2016-05-09 12:46:18 [scrapy-splash-example] DEBUG: Response: status=200; url=http://rads.stackoverflow.com/amzn/click/B00J0T73XO 
2016-05-09 12:46:18 [scrapy] INFO: Closing spider (finished) 
2016-05-09 12:46:18 [scrapy] INFO: Dumping Scrapy stats: 
{'downloader/request_bytes': 2690, 
'downloader/request_count': 5, 
'downloader/request_method_count/POST': 5, 
'downloader/response_bytes': 1794947, 
'downloader/response_count': 5, 
'downloader/response_status_count/200': 5, 
'finish_reason': 'finished', 
'finish_time': datetime.datetime(2016, 5, 9, 10, 46, 18, 631501), 
'log_count/DEBUG': 11, 
'log_count/INFO': 7, 
'response_received_count': 5, 
'scheduler/dequeued': 10, 
'scheduler/dequeued/memory': 10, 
'scheduler/enqueued': 10, 
'scheduler/enqueued/memory': 10, 
'splash/render.html/request_count': 5, 
'splash/render.html/response_count/200': 5, 
'start_time': datetime.datetime(2016, 5, 9, 10, 46, 5, 368693)} 
2016-05-09 12:46:18 [scrapy] INFO: Spider closed (finished) 
Comment: Thank you, this helps a lot, but I'm still getting different output. Could you include your settings.py in the answer? A two-line sample of my output: '[1]: 2016-05-09 12:34:24 [scrapy] DEBUG: Crawled (429) (referer: None) [2]: 2016-05-09 12:34:25 [ia_check] DEBUG: Response: status=429; url=http://192.168.59.103:8050/render.html'

Comment: I've added settings.py; it is really the standard one for scrapy-splash. The one thing I haven't tested is status codes other than 200.

Comment: There were a few differences between my settings file and the one provided in the [scrapinghub blog/splash tutorial](https://blog.scrapinghub.com/2015/03/02/handling-javascript-in-splash-with-splash/) (which I now understand is outdated); unfortunately, the differences between scrapyjs and scrapy-splash are not well documented.
