2017-09-15 9 views

I am trying to write a program that scrapes the URLs from the References section of a Wikipedia page, but I am having trouble isolating the right tag/class.

## Import required packages ## 
from urllib.request import urlopen 
from urllib.error import HTTPError 
from bs4 import BeautifulSoup 
import re 

selectWikiPage = input(print("Please enter the Wikipedia page you wish to scrape from")) 
isWikiFound = re.findall(selectWikiPage, 'wikipedia') 
if "wikipedia" in selectWikiPage: 
    print("Input accepted") 
    html = urlopen(selectWikiPage) 
    bsObj = BeautifulSoup(html, "lxml") 
    findReferences = bsObj.findAll("#References") 
    for wikiReferences in findReferences: 
        print(wikiReferences.get_text()) 

else: 
    print("Error: Please enter a valid Wikipedia URL") 
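A likely reason the code above prints nothing: `findAll("#References")` treats the string as a literal tag name, not as a CSS selector, so it never matches anything. CSS selectors such as `#References` belong in `select()` instead. A minimal sketch with a hypothetical HTML snippet:

```python
from bs4 import BeautifulSoup

# Wikipedia marks the section heading with a span whose id is "References";
# this is a simplified stand-in for that markup.
html = '<h2><span id="References">References</span></h2>'
soup = BeautifulSoup(html, "html.parser")

# findAll matches tag names, so this looks for a tag literally
# named "#References" and always comes back empty.
print(soup.findAll("#References"))   # []

# select() interprets the string as a CSS selector and finds the span.
print(soup.select("#References"))
```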

This is the output of the program:

Please enter the Wikipedia page you wish to scrape from 
Nonehttp://wikipedia.org/wiki/randomness 
Input accepted 

Your findAll returns nothing. First select the references section, then search within that section with `bsObj.find("ol", {"class": "references"}).findAll('a')` – Lexxxxx

Answer

I changed your code a little to use the requests library.

If you only want the links that are cited as sources for the text used in the wiki page, I used this link as a test case: https://en.wikipedia.org/wiki/Randomness

import requests 
from bs4 import BeautifulSoup 

session = requests.Session()  
selectWikiPage = input(print("Please enter the Wikipedia page you wish to scrape from")) 

if "wikipedia" in selectWikiPage: 
    html = session.post(selectWikiPage) 
    bsObj = BeautifulSoup(html.text, "html.parser") 
    findReferences = bsObj.findAll('span', {'class':'reference-text'}) 
    href = BeautifulSoup(str(findReferences), "html.parser") 
    links = [a["href"] for a in href.find_all("a", href=True)] 
    for link in links: 
        print("Link: " + link) 
else: 
    print("Error: Please enter a valid Wikipedia URL") 

Output:

Please enter the Wikipedia page you wish to scrape from 
Nonehttps://en.wikipedia.org/wiki/Randomness 
Link: /wiki/Oxford_English_Dictionary 
Link: http://www.people.fas.harvard.edu/~junliu/Workshops/workshop2007/ 
Link: /wiki/International_Standard_Book_Number_(identifier) 
Link: /wiki/Special:BookSources/0-19-512332-8 
Link: /wiki/International_Standard_Book_Number_(identifier) 
Link: /wiki/Special:BookSources/0-674-01517-7 
Link: /wiki/International_Standard_Book_Number_(identifier) 
Link: /wiki/Special:BookSources/0-387-98844-0 
Link: http://www.nature.com/nature/journal/v446/n7138/abs/nature05677.html 
Link: /w/index.php?title=Bell%27s_aspect_experiment&action=edit&redlink=1 
Link: /wiki/Nature_(journal) 
Link: /wiki/John_Gribbin 
Link: https://www.academia.edu/11720588/No_entailing_laws_but_enablement_in_the_evolution_of_the_biosphere 
Link: /wiki/International_Standard_Book_Number 
Link: /wiki/Special:BookSources/9781450311786 
Link: /wiki/Digital_object_identifier 
Link: //doi.org/10.1145%2F2330784.2330946 
Link: https://www.academia.edu/11720575/Extended_criticality_phase_spaces_and_enablement_in_biology 
Link: /wiki/Digital_object_identifier 
Link: //doi.org/10.1016%2Fj.chaos.2013.03.008 
Link: /wiki/PubMed_Identifier 
Link: //www.ncbi.nlm.nih.gov/pubmed/7059501 
Link: /wiki/Digital_object_identifier 
Link: //doi.org/10.1111%2Fj.1365-2133.1982.tb00897.x 
Link: http://webpages.uncc.edu/yonwang/papers/thesis.pdf 
Link: http://www.lbl.gov/Science-Articles/Archive/pi-random.html 
Link: http://www.ciphersbyritter.com/RES/RANDTEST.HTM 
Link: http://dx.doi.org/10.1038/nature09008 
Link: https://www.nytimes.com/2008/06/08/books/review/Johnson-G-t.html?_r=1 
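Note that many of the printed hrefs are relative (`/wiki/...`) or scheme-relative (`//doi.org/...`). If you want fully qualified URLs, the standard library's `urljoin` can resolve them against the page you scraped; a small sketch using a hypothetical sample of the hrefs above:

```python
from urllib.parse import urljoin

page = "https://en.wikipedia.org/wiki/Randomness"

# Hypothetical sample of hrefs like those printed above.
hrefs = [
    "/wiki/Oxford_English_Dictionary",
    "//doi.org/10.1145%2F2330784.2330946",
    "http://www.ciphersbyritter.com/RES/RANDTEST.HTM",
]

# urljoin resolves relative and scheme-relative hrefs against the page URL
# and leaves already-absolute URLs untouched.
for href in hrefs:
    print(urljoin(page, href))
```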

If you want to grab every URL link inside the references section:
import requests 
from bs4 import BeautifulSoup 

session = requests.Session() 
selectWikiPage = input(print("Please enter the Wikipedia page you wish to scrape from")) 

if "wikipedia" in selectWikiPage: 
    html = session.post(selectWikiPage) 
    bsObj = BeautifulSoup(html.text, "html.parser") 
    findReferences = bsObj.find('ol', {'class': 'references'}) 
    href = BeautifulSoup(str(findReferences), "html.parser") 
    links = [a["href"] for a in href.find_all("a", href=True)] 
    for link in links: 
        print("Link: " + link) 
else: 
    print("Error: Please enter a valid Wikipedia URL") 

Output:

Please enter the Wikipedia page you wish to scrape from 
Nonehttps://en.wikipedia.org/wiki/Randomness 
Link: #cite_ref-1 
Link: /wiki/Oxford_English_Dictionary 
Link: #cite_ref-2 
Link: http://www.people.fas.harvard.edu/~junliu/Workshops/workshop2007/ 
Link: #cite_ref-3 
Link: /wiki/International_Standard_Book_Number_(identifier) 
Link: /wiki/Special:BookSources/0-19-512332-8 
Link: #cite_ref-4 
Link: /wiki/International_Standard_Book_Number_(identifier) 
Link: /wiki/Special:BookSources/0-674-01517-7 
Link: #cite_ref-5 
Link: /wiki/International_Standard_Book_Number_(identifier) 
Link: /wiki/Special:BookSources/0-387-98844-0 
Link: #cite_ref-6 
Link: http://www.nature.com/nature/journal/v446/n7138/abs/nature05677.html 
Link: /w/index.php?title=Bell%27s_aspect_experiment&action=edit&redlink=1 
Link: /wiki/Nature_(journal) 
Link: #cite_ref-7 
Link: /wiki/John_Gribbin 
Link: #cite_ref-8 
Link: https://www.academia.edu/11720588/No_entailing_laws_but_enablement_in_the_evolution_of_the_biosphere 
Link: /wiki/International_Standard_Book_Number 
Link: /wiki/Special:BookSources/9781450311786 
Link: /wiki/Digital_object_identifier 
Link: //doi.org/10.1145%2F2330784.2330946 
Link: #cite_ref-9 
Link: https://www.academia.edu/11720575/Extended_criticality_phase_spaces_and_enablement_in_biology 
Link: /wiki/Digital_object_identifier 
Link: //doi.org/10.1016%2Fj.chaos.2013.03.008 
Link: #cite_ref-10 
Link: /wiki/PubMed_Identifier 
Link: //www.ncbi.nlm.nih.gov/pubmed/7059501 
Link: /wiki/Digital_object_identifier 
Link: //doi.org/10.1111%2Fj.1365-2133.1982.tb00897.x 
Link: #cite_ref-11 
Link: http://webpages.uncc.edu/yonwang/papers/thesis.pdf 
Link: #cite_ref-12 
Link: http://www.lbl.gov/Science-Articles/Archive/pi-random.html 
Link: #cite_ref-13 
Link: #cite_ref-14 
Link: #cite_ref-15 
Link: http://www.ciphersbyritter.com/RES/RANDTEST.HTM 
Link: #cite_ref-16 
Link: http://dx.doi.org/10.1038/nature09008 
Link: #cite_ref-NYOdds_17-0 
Link: #cite_ref-NYOdds_17-1 
Link: https://www.nytimes.com/2008/06/08/books/review/Johnson-G-t.html?_r=1 
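The second approach also picks up the in-page back-reference anchors (`#cite_ref-...`). If those are unwanted, they can be filtered out since they all start with `#`; a small sketch over a hypothetical sample of the links above:

```python
# Hypothetical sample of hrefs like those in the output above.
links = [
    "#cite_ref-1",
    "/wiki/Oxford_English_Dictionary",
    "#cite_ref-2",
    "http://www.people.fas.harvard.edu/~junliu/Workshops/workshop2007/",
]

# Keep only hrefs that point somewhere other than an in-page anchor.
external = [href for href in links if not href.startswith("#")]
for href in external:
    print("Link: " + href)
```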

If the OP wants all hyperlinks, `links = [a["href"] for a in soup.find_all("a", href=True)]` should be enough. – Tony


Hi Tony, thanks for your reply. I have edited my answer to fit the scope of the OP's question. – Ali