
Python: How do I run a dryscrape session?

Tags: python, python-3.x, web-scraping, dryscrape

I want to start a dryscrape session on my Mac. The code I'm trying to run is:

import dryscrape
session = dryscrape.Session(base_url = 'http://google.com')
But when I run it, I get the following permission error:

Traceback (most recent call last):
  File "<ipython-input-37-5e3204f25ebb>", line 3, in <module>
    session = dryscrape.Session(base_url = 'http://google.com')
  File "/Users/MyName/anaconda/lib/python3.5/site-packages/dryscrape/session.py", line 22, in __init__
    self.driver = driver or DefaultDriver()
  File "/Users/MyName/anaconda/lib/python3.5/site-packages/dryscrape/driver/webkit.py", line 30, in __init__
    super(Driver, self).__init__(**kw)
  File "/Users/MyName/anaconda/lib/python3.5/site-packages/webkit_server.py", line 230, in __init__
    self.conn = connection or ServerConnection()
  File "/Users/MyName/anaconda/lib/python3.5/site-packages/webkit_server.py", line 507, in __init__
    self._sock = (server or get_default_server()).connect()
  File "/Users/MyName/anaconda/lib/python3.5/site-packages/webkit_server.py", line 450, in get_default_server
    _default_server = Server()
  File "/Users/MyName/anaconda/lib/python3.5/site-packages/webkit_server.py", line 416, in __init__
    stderr = subprocess.PIPE)
  File "/Users/MyName/anaconda/lib/python3.5/subprocess.py", line 947, in __init__
    restore_signals, start_new_session)
  File "/Users/MyName/anaconda/lib/python3.5/subprocess.py", line 1551, in _execute_child
    raise child_exception_type(errno_num, err_msg)
PermissionError: [Errno 13] Permission denied
I tried running it from the terminal with sudo, but I still get the same error. Thanks for any help! Note: I will upvote all answers and accept the best one.

I have the following working:

# scrape.py
import dryscrape

s = dryscrape.Session()
s.visit("https://www.google.com/search?q={}".format('query'))
print(s.body().encode("utf-8"))
This should print the HTML.

I run it like this:

python scrape.py > results.html

and then open results.html in a browser to check the result.
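
If you'd rather not rely on shell redirection, the session can also write the file itself. A minimal sketch varying the script above (results.html is just an example filename):

# scrape_to_file.py -- same idea, but write the HTML from Python
import dryscrape

s = dryscrape.Session()
s.visit("https://www.google.com/search?q={}".format('query'))

# body() returns the rendered page source as a string; save it as UTF-8
with open("results.html", "w", encoding="utf-8") as f:
    f.write(s.body())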

Here is a very basic example from the documentation:

import dryscrape
import sys

if 'linux' in sys.platform:
    # start xvfb in case no X is running. Make sure xvfb 
    # is installed, otherwise this won't work!
    dryscrape.start_xvfb()

search_term = 'dryscrape'

# set up a web scraping session
sess = dryscrape.Session(base_url = 'http://google.com')

# we don't need images
sess.set_attribute('auto_load_images', False)

# visit homepage and search for a term
sess.visit('/')
q = sess.at_xpath('//*[@name="q"]')
q.set(search_term)
q.form().submit()

# extract all links
for link in sess.xpath('//a[@href]'):
  print(link['href'])

# save a screenshot of the web page
sess.render('google.png')
print("Screenshot written to 'google.png'")

I still get PermissionError: [Errno 13] Permission denied.
Still getting the permission error :(
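
For what it's worth, an Errno 13 raised from the subprocess call in webkit_server.py usually means the webkit_server helper binary that dryscrape launches is not marked executable, which sudo will not fix. Below is a hedged diagnostic sketch; the assumption that the binary is named webkit_server and sits next to webkit_server.py reflects a typical pip install of webkit-server and may not match your setup:

# check_webkit_server.py -- diagnostic sketch (binary name/location assumed)
import os
import stat
import webkit_server

# Assumption: the helper binary ships next to webkit_server.py
pkg_dir = os.path.dirname(webkit_server.__file__)
binary = os.path.join(pkg_dir, 'webkit_server')

if not os.path.exists(binary):
    print("binary not found at {!r} -- try reinstalling webkit-server".format(binary))
else:
    mode = os.stat(binary).st_mode
    if mode & stat.S_IXUSR:
        print(binary, "is already executable")
    else:
        # restore the user-executable bit (may need write access to site-packages)
        os.chmod(binary, mode | stat.S_IXUSR)
        print("added +x to", binary)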