
Python: scraping a table with Scrapy


Apologies in advance for the long post.

I have a table that I'm trying to scrape with Scrapy, but I can't quite figure out how to drill down deep enough into it.

Here's the table:

<table class="detail-table" border="0" cellspacing="0">
 <tbody>
 <tr id="trAnimalID">
  ...
 </tr>
 <tr id="trSpecies">
  ...
 </tr>
 <tr id="trBreed">
  ...
 </tr>
 <tr id="trAge">
  ...
 </tr>
 <tr id="trSex">
  ...
 </tr>
 <tr id="trSize">
  ...
 </tr>
 <tr id="trColor">
  ...
 </tr>
 <tr id="trDeclawed">
  ...
 </tr>
 <tr id="trHousetrained">
  ...
 </tr>
 <tr id="trLocation">
  ...
 </tr>
 <tr id="trIntakeDate">
  <td class="detail-label" align="right">
   <b>Intake Date</b>
  </td>
  <td class="detail-value">
   <span id="lblIntakeDate">3/31/2020</span>&nbsp;
  </td>
 </tr>
 <tr id="trStage">
  <td class="detail-label" align="right">
   <b>Stage</b>
  </td>
  <td class="detail-value">
   <span id="lblStage">Reserved</span>
  </td>
 </tr>
 </tbody></table>
Here's what I'm getting back:

'<tr id="trIntakeDate">\r\n\t
  <td class="detail-label" align="right">\r\n
   <b>Intake Date</b>\r\n
  </td>\r\n\t
  <td class="detail-value">\r\n
   <span id="lblIntakeDate">3/31/2020</span>\xa0\r\n
  </td>\r\n
</tr>'
I can't quite figure out how to get at the value of lblIntakeDate; all I need is 3/31/2020. I'd also like to run this as a Lambda, but I can't figure out how to get the execute function to dump a JSON file the way the command line does. Any ideas?
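For the Lambda half of the question, one option is to skip the scrapy command line entirely and write the JSON yourself inside the handler. A minimal, stdlib-only sketch (the handler signature, the event shape, and the SpanText helper are all hypothetical, not part of the original code):

```python
import json
from html.parser import HTMLParser


class SpanText(HTMLParser):
    """Collect the text inside <span id="lblIntakeDate">."""

    def __init__(self):
        super().__init__()
        self.capture = False
        self.value = None

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) tuples
        if tag == 'span' and ('id', 'lblIntakeDate') in attrs:
            self.capture = True

    def handle_data(self, data):
        if self.capture:
            self.value = data.strip()
            self.capture = False


def handler(event, context):
    # In a real Lambda, event['html'] would be the page body you fetched.
    p = SpanText()
    p.feed(event['html'])
    # Serialize the extracted value to JSON rather than relying on a feed export.
    body = json.dumps({'intake_date': p.value})
    return {'statusCode': 200, 'body': body}


result = handler({'html': '<span id="lblIntakeDate">3/31/2020</span>'}, None)
print(result['body'])  # {"intake_date": "3/31/2020"}
```

In a real Lambda you'd write `body` to S3 or just return it to the caller; the point is that nothing forces you through `scrapy crawl -o out.json`.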

Try this:

//table[@class='detail-table']/tbody//tr/td/span[@id='lblIntakeDate']/text()
Also strip out the extra characters such as \r\n and \t from the result.

Or try:

from urllib.request import urlopen

from bs4 import BeautifulSoup  # this import was missing

url = ''
html = urlopen(url)
bs = BeautifulSoup(html.read(), 'html.parser')

# print the text of every <a> tag on the page
for i in bs.find_all('a'):
    print(i.get_text())
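The loop above prints every &lt;a&gt; tag, which is more of a smoke test than the actual extraction; with BeautifulSoup you can go straight to the element by its id. A sketch against a trimmed copy of the markup:

```python
from bs4 import BeautifulSoup

html = '<td class="detail-value"><span id="lblIntakeDate">3/31/2020</span>&nbsp;</td>'
bs = BeautifulSoup(html, 'html.parser')

# find() with an id keyword argument returns the one matching element
span = bs.find('span', id='lblIntakeDate')
print(span.get_text())  # 3/31/2020
```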

This pulled me in the right direction. The query you posted didn't bring anything back, but once I plugged in text() it worked! This is what I was able to use:
text = response.xpath('//*[@id="lblIntakeDate"]/text()').extract()
Thanks for your help and for linking that query editor! @ You're very welcome, glad it could help, and good luck.