Connection timeout error when reading a file from HDFS using Python


I have created a single-node HDFS in a VM (hadoop.master, IP: 192.168.12.52). The file etc/hadoop/core-site.xml has the following namenode configuration:

<configuration>
 <property>
  <name>fs.defaultFS</name>
  <value>hdfs://master.hadoop:9000/</value>
 </property>
</configuration>
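
The hdfs_read.py script itself is not quoted here; a minimal reconstruction, taking the client URL, the user (embs) and the file path from the traceback below, would look roughly like this:

from hdfs import InsecureClient

# Reconstruction (not necessarily the original script): the NameNode address comes
# from core-site.xml above, the user and file path from the traceback below.
client = InsecureClient('http://192.168.12.52:9000', user='embs')
with client.read('/home/edhuser/testdata.txt') as reader:
    features = reader.read()
    print(features)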
Now, when I run it, I get the following timeout error:

$ python3 hdfs_read.py 
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 137, in _new_conn
    (self.host, self.port), self.timeout, **extra_kw)
  File "/usr/lib/python3/dist-packages/urllib3/util/connection.py", line 91, in create_connection
    raise err
  File "/usr/lib/python3/dist-packages/urllib3/util/connection.py", line 81, in create_connection
    sock.connect(sa)
OSError: [Errno 113] No route to host

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 560, in urlopen
    body=body, headers=headers)
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 354, in _make_request
    conn.request(method, url, **httplib_request_kw)
  File "/usr/lib/python3.6/http/client.py", line 1239, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/usr/lib/python3.6/http/client.py", line 1285, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "/usr/lib/python3.6/http/client.py", line 1234, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/usr/lib/python3.6/http/client.py", line 1026, in _send_output
    self.send(msg)
  File "/usr/lib/python3.6/http/client.py", line 964, in send
    self.connect()
  File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 162, in connect
    conn = self._new_conn()
  File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 146, in _new_conn
    self, "Failed to establish a new connection: %s" % e)
requests.packages.urllib3.exceptions.NewConnectionError: <requests.packages.urllib3.connection.HTTPConnection object at 0x7f2d88cef2b0>: Failed to establish a new connection: [Errno 113] No route to host

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/requests/adapters.py", line 376, in send
    timeout=timeout
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 610, in urlopen
    _stacktrace=sys.exc_info()[2])
  File "/usr/lib/python3/dist-packages/urllib3/util/retry.py", line 273, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
requests.packages.urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='192.168.12.52', port=9000): Max retries exceeded with url: /webhdfs/v1/home/edhuser/testdata.txt?user.name=embs&offset=0&op=OPEN (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f2d88cef2b0>: Failed to establish a new connection: [Errno 113] No route to host',))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "hdfs_read_local.py", line 3, in <module>
    with client.read('/home/edhuser/testdata.txt') as reader:
  File "/usr/lib/python3.6/contextlib.py", line 81, in __enter__
    return next(self.gen)
  File "/home/embs/.local/lib/python3.6/site-packages/hdfs/client.py", line 678, in read
    buffersize=buffer_size,
  File "/home/embs/.local/lib/python3.6/site-packages/hdfs/client.py", line 118, in api_handler
    raise err
  File "/home/embs/.local/lib/python3.6/site-packages/hdfs/client.py", line 107, in api_handler
    **self.kwargs
  File "/home/embs/.local/lib/python3.6/site-packages/hdfs/client.py", line 207, in _request
    **kwargs
  File "/usr/lib/python3/dist-packages/requests/sessions.py", line 468, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/lib/python3/dist-packages/requests/sessions.py", line 576, in send
    r = adapter.send(request, **kwargs)
  File "/usr/lib/python3/dist-packages/requests/adapters.py", line 437, in send
    raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='192.168.12.52', port=9000): Max retries exceeded with url: /webhdfs/v1/home/edhuser/testdata.txt?user.name=embs&offset=0&op=OPEN (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f2d88cef2b0>: Failed to establish a new connection: [Errno 113] No route to host',))
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/apport_python_hook.py", line 63, in apport_excepthook
    from apport.fileutils import likely_packaged, get_recent_crashes
  File "/usr/lib/python3/dist-packages/apport/__init__.py", line 5, in <module>
    from apport.report import Report
  File "/usr/lib/python3/dist-packages/apport/report.py", line 30, in <module>
    import apport.fileutils
  File "/usr/lib/python3/dist-packages/apport/fileutils.py", line 23, in <module>
    from apport.packaging_impl import impl as packaging
  File "/usr/lib/python3/dist-packages/apport/packaging_impl.py", line 23, in <module>
    import apt
  File "/usr/lib/python3/dist-packages/apt/__init__.py", line 23, in <module>
    import apt_pkg
ModuleNotFoundError: No module named 'apt_pkg'

Original exception was:
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 137, in _new_conn
    (self.host, self.port), self.timeout, **extra_kw)
  File "/usr/lib/python3/dist-packages/urllib3/util/connection.py", line 91, in create_connection
    raise err
  File "/usr/lib/python3/dist-packages/urllib3/util/connection.py", line 81, in create_connection
    sock.connect(sa)
OSError: [Errno 113] No route to host

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 560, in urlopen
    body=body, headers=headers)
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 354, in _make_request
    conn.request(method, url, **httplib_request_kw)
  File "/usr/lib/python3.6/http/client.py", line 1239, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/usr/lib/python3.6/http/client.py", line 1285, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "/usr/lib/python3.6/http/client.py", line 1234, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/usr/lib/python3.6/http/client.py", line 1026, in _send_output
    self.send(msg)
  File "/usr/lib/python3.6/http/client.py", line 964, in send
    self.connect()
  File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 162, in connect
    conn = self._new_conn()
  File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 146, in _new_conn
    self, "Failed to establish a new connection: %s" % e)
requests.packages.urllib3.exceptions.NewConnectionError: <requests.packages.urllib3.connection.HTTPConnection object at 0x7f2d88cef2b0>: Failed to establish a new connection: [Errno 113] No route to host

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/requests/adapters.py", line 376, in send
    timeout=timeout
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 610, in urlopen
    _stacktrace=sys.exc_info()[2])
  File "/usr/lib/python3/dist-packages/urllib3/util/retry.py", line 273, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
requests.packages.urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='192.168.12.52', port=9000): Max retries exceeded with url: /webhdfs/v1/home/edhuser/testdata.txt?user.name=embs&offset=0&op=OPEN (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f2d88cef2b0>: Failed to establish a new connection: [Errno 113] No route to host',))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "hdfs_read.py", line 3, in <module>
    with client.read('/home/edhuser/testdata.txt') as reader:
  File "/usr/lib/python3.6/contextlib.py", line 81, in __enter__
    return next(self.gen)
  File "/home/embs/.local/lib/python3.6/site-packages/hdfs/client.py", line 678, in read
    buffersize=buffer_size,
  File "/home/embs/.local/lib/python3.6/site-packages/hdfs/client.py", line 118, in api_handler
    raise err
  File "/home/embs/.local/lib/python3.6/site-packages/hdfs/client.py", line 107, in api_handler
    **self.kwargs
  File "/home/embs/.local/lib/python3.6/site-packages/hdfs/client.py", line 207, in _request
    **kwargs
  File "/usr/lib/python3/dist-packages/requests/sessions.py", line 468, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/lib/python3/dist-packages/requests/sessions.py", line 576, in send
    r = adapter.send(request, **kwargs)
  File "/usr/lib/python3/dist-packages/requests/adapters.py", line 437, in send
    raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='192.168.12.52', port=9000): Max retries exceeded with url: /webhdfs/v1/home/edhuser/testdata.txt?user.name=embs&offset=0&op=OPEN (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f2d88cef2b0>: Failed to establish a new connection: [Errno 113] No route to host',))
I know HDFS is separate from the local filesystem, and perhaps it shows these files because I previously copied the contents of HDFS onto the local machine with hdfs dfs -get /test_storage/. But when I search for the file under the namenode's path, it only returns files I cannot make sense of:

$ ls /opt/volume/namenode/current/
edits_0000000000000000001-0000000000000000002
edits_0000000000000000003-0000000000000000010
edits_0000000000000000011-0000000000000000012
edits_0000000000000000013-0000000000000000015
edits_0000000000000000016-0000000000000000023
edits_0000000000000000024-0000000000000000025
edits_0000000000000000026-0000000000000000032
edits_0000000000000000033-0000000000000000033
edits_0000000000000000034-0000000000000000035
edits_0000000000000000036-0000000000000000037
edits_0000000000000000038-0000000000000000039
edits_0000000000000000040-0000000000000000041
edits_0000000000000000042-0000000000000000043
edits_0000000000000000044-0000000000000000045
edits_0000000000000000046-0000000000000000047
edits_0000000000000000048-0000000000000000049
edits_0000000000000000050-0000000000000000051
edits_0000000000000000052-0000000000000000053
edits_0000000000000000054-0000000000000000055
edits_0000000000000000056-0000000000000000057
edits_0000000000000000058-0000000000000000059
edits_0000000000000000060-0000000000000000061
edits_0000000000000000062-0000000000000000063
edits_0000000000000000064-0000000000000000065
edits_0000000000000000066-0000000000000000067
edits_0000000000000000068-0000000000000000070
edits_0000000000000000071-0000000000000000072
edits_0000000000000000073-0000000000000000074
edits_0000000000000000075-0000000000000000076
edits_0000000000000000077-0000000000000000078
edits_inprogress_0000000000000000079
fsimage_0000000000000000076
fsimage_0000000000000000076.md5
fsimage_0000000000000000078
fsimage_0000000000000000078.md5
seen_txid
VERSION
So, if the file path I specified for reading is wrong, what would be the correct file path?

EDIT: When I change the port to 50070 (i.e., client = InsecureClient('http://192.168.12.52:50070')), I get the following error:

$ python3 hdfs_read_local.py 
Traceback (most recent call last):
  File "hdfs_read.py", line 3, in <module>
    with client.read('/opt/hadoop/LICENSE.txt') as reader:
  File "/usr/lib/python3.6/contextlib.py", line 81, in __enter__
    return next(self.gen)
  File "/home/embs/.local/lib/python3.6/site-packages/hdfs/client.py", line 678, in read
    buffersize=buffer_size,
  File "/home/embs/.local/lib/python3.6/site-packages/hdfs/client.py", line 112, in api_handler
    raise err
  File "/home/embs/.local/lib/python3.6/site-packages/hdfs/client.py", line 107, in api_handler
    **self.kwargs
  File "/home/embs/.local/lib/python3.6/site-packages/hdfs/client.py", line 210, in _request
    _on_error(response)
  File "/home/embs/.local/lib/python3.6/site-packages/hdfs/client.py", line 50, in _on_error
    raise HdfsError(message, exception=exception)
hdfs.util.HdfsError: File /opt/hadoop/LICENSE.txt not found.
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 137, in _new_conn
    (self.host, self.port), self.timeout, **extra_kw)
  File "/usr/lib/python3/dist-packages/urllib3/util/connection.py", line 91, in create_connection
    raise err
  File "/usr/lib/python3/dist-packages/urllib3/util/connection.py", line 81, in create_connection
    sock.connect(sa)
OSError: [Errno 113] No route to host

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 560, in urlopen
    body=body, headers=headers)
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 354, in _make_request
    conn.request(method, url, **httplib_request_kw)
  File "/usr/lib/python3.6/http/client.py", line 1239, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/usr/lib/python3.6/http/client.py", line 1285, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "/usr/lib/python3.6/http/client.py", line 1234, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/usr/lib/python3.6/http/client.py", line 1026, in _send_output
    self.send(msg)
  File "/usr/lib/python3.6/http/client.py", line 964, in send
    self.connect()
  File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 162, in connect
    conn = self._new_conn()
  File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 146, in _new_conn
    self, "Failed to establish a new connection: %s" % e)
requests.packages.urllib3.exceptions.NewConnectionError: <requests.packages.urllib3.connection.HTTPConnection object at 0x7f2e87867400>: Failed to establish a new connection: [Errno 113] No route to host

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/requests/adapters.py", line 376, in send
    timeout=timeout
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 610, in urlopen
    _stacktrace=sys.exc_info()[2])
  File "/usr/lib/python3/dist-packages/urllib3/util/retry.py", line 273, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
requests.packages.urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='pr2.embs', port=50075): Max retries exceeded with url: /webhdfs/v1/test_storage/LICENSE.txt?op=OPEN&user.name=embs&namenoderpcaddress=192.168.12.52:9000&offset=0 (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f2e87867400>: Failed to establish a new connection: [Errno 113] No route to host',))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "hdfs_read_local.py", line 3, in <module>
    with client.read('/test_storage/LICENSE.txt') as reader:
  File "/usr/lib/python3.6/contextlib.py", line 81, in __enter__
    return next(self.gen)
  File "/home/embs/.local/lib/python3.6/site-packages/hdfs/client.py", line 678, in read
    buffersize=buffer_size,
  File "/home/embs/.local/lib/python3.6/site-packages/hdfs/client.py", line 118, in api_handler
    raise err
  File "/home/embs/.local/lib/python3.6/site-packages/hdfs/client.py", line 107, in api_handler
    **self.kwargs
  File "/home/embs/.local/lib/python3.6/site-packages/hdfs/client.py", line 207, in _request
    **kwargs
  File "/usr/lib/python3/dist-packages/requests/sessions.py", line 468, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/lib/python3/dist-packages/requests/sessions.py", line 597, in send
    history = [resp for resp in gen] if allow_redirects else []
  File "/usr/lib/python3/dist-packages/requests/sessions.py", line 597, in <listcomp>
    history = [resp for resp in gen] if allow_redirects else []
  File "/usr/lib/python3/dist-packages/requests/sessions.py", line 195, in resolve_redirects
    **adapter_kwargs
  File "/usr/lib/python3/dist-packages/requests/sessions.py", line 576, in send
    r = adapter.send(request, **kwargs)
  File "/usr/lib/python3/dist-packages/requests/adapters.py", line 437, in send
    raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='pr2.embs', port=50075): Max retries exceeded with url: /webhdfs/v1/test_storage/LICENSE.txt?op=OPEN&user.name=embs&namenoderpcaddress=192.168.12.52:9000&offset=0 (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f2e87867400>: Failed to establish a new connection: [Errno 113] No route to host',))
http://192.168.12.52:9000

9000 is an RPC port. 50070 is the default WebHDFS HTTP port.

You can also get "no route to host" if WebHDFS is disabled, if the datanode is not exposing port 50075 (the datanode HTTP address) because that port is blocked, or if you have changed that property.
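
For reference, the datanode HTTP address is controlled by the dfs.datanode.http.address property in hdfs-site.xml; a minimal sketch, assuming the Hadoop 2.x default value, would be:

<configuration>
 <property>
  <name>dfs.datanode.http.address</name>
  <value>0.0.0.0:50075</value>
 </property>
</configuration>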

client.read('/opt/hadoop/LICENSE.txt')

You are running HDFS in pseudo-distributed mode, but you are trying to read a local file. /opt does not exist in HDFS by default; you only ran a local ls... Instead, you should use hadoop fs -ls /opt to see which files actually exist at the path you are trying to open.

But when I search for the file under the namenode's path, it only returns files I cannot make sense of:

$ ls /opt/volume/namenode/current/
edits_0000000000000000001-0000000000000000002
edits_0000000000000000003-0000000000000000010
edits_0000000000000000011-0000000000000000012
edits_0000000000000000013-0000000000000000015
edits_0000000000000000016-0000000000000000023
edits_0000000000000000024-0000000000000000025
edits_0000000000000000026-0000000000000000032
edits_0000000000000000033-0000000000000000033
edits_0000000000000000034-0000000000000000035
edits_0000000000000000036-0000000000000000037
edits_0000000000000000038-0000000000000000039
edits_0000000000000000040-0000000000000000041
edits_0000000000000000042-0000000000000000043
edits_0000000000000000044-0000000000000000045
edits_0000000000000000046-0000000000000000047
edits_0000000000000000048-0000000000000000049
edits_0000000000000000050-0000000000000000051
edits_0000000000000000052-0000000000000000053
edits_0000000000000000054-0000000000000000055
edits_0000000000000000056-0000000000000000057
edits_0000000000000000058-0000000000000000059
edits_0000000000000000060-0000000000000000061
edits_0000000000000000062-0000000000000000063
edits_0000000000000000064-0000000000000000065
edits_0000000000000000066-0000000000000000067
edits_0000000000000000068-0000000000000000070
edits_0000000000000000071-0000000000000000072
edits_0000000000000000073-0000000000000000074
edits_0000000000000000075-0000000000000000076
edits_0000000000000000077-0000000000000000078
edits_inprogress_0000000000000000079
fsimage_0000000000000000076
fsimage_0000000000000000076.md5
fsimage_0000000000000000078
fsimage_0000000000000000078.md5
seen_txid
VERSION
Your files are not stored in the namenode... only their metadata is.

Your files are stored in the datanode data directories, but as blocks rather than human-readable content.

You can run this command to get a list of all the blocks and their locations:

hdfs fsck /path/to/file.txt -files -blocks
As described in this post, this Python library uses WebHDFS. If you want to test whether the host and file path are correct, you can use the following command: curl -i 'http://192.168.12.52:50070/webhdfs/v1/?op=LISTSTATUS'. This will list a directory in HDFS. If that works, you can use the same configuration in Python:

from hdfs import InsecureClient
client = InsecureClient('http://192.168.12.52:50070')
with client.read('<hdfs_path>') as reader:
    features = reader.read()
    print(features)
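
The same LISTSTATUS check can also be done from Python before attempting a read; a minimal sketch, assuming the same host and port as above:

from hdfs import InsecureClient

client = InsecureClient('http://192.168.12.52:50070')
# Equivalent to the curl LISTSTATUS check above: list the HDFS root directory.
print(client.list('/'))

If this prints the expected directory names (for example test_storage), the host and port are correct, and only the path passed to client.read() still needs to match an existing file in HDFS.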

There may be a problem with your network configuration. For now, try the following adjusted code:

from hdfs import InsecureClient
client = InsecureClient('http://0.0.0.0:50070')
with client.read('/test-storage/LICENSE.txt') as reader:
    features = reader.read()
    print(features)

Read up on what the 0.0.0.0 IP address means.

Hi, I faced a similar issue. It looks like the port is right. In my case, I was able to get the directory listing but could not write any data. The problem was that my VPN was blocking some ports, and reads and writes use different ports.

9000 is not an HTTP port... you would need hdfs:// for that. However, the error mentions WebHDFS, which you need to enable separately. @cricket_007 When I change http to hdfs, I get this error: InvalidSchema: No connection adapters were found for 'hdfs://192.168.12.52:9000/webhdfs/v1/opt/hadoop/LICENSE.txt'
I didn't know this library uses WebHDFS, so yes, it needs to be HTTP, but on port 50070. OK, after doing that I get a new error, which I have updated in the question details. What can I try next? Is it that I am giving the wrong file path in the Python code, as I suspected earlier in the question details? When I run hdfs dfs -ls /test_storage/ I see the two files shown in the HDFS web UI screenshot in the question details, and nothing else. Apart from test_storage there are no other folders in HDFS. As I said before, I know the file path I entered may be wrong, but I don't know what correct HDFS file path to give in this Python code so that the HDFS file can be read from another computer, and that is where I need help... The only reason for that error was that the given path did not exist, so I changed the file path to /test_storage/LICENSE.txt, and now it gives a connection error. In the Python script the port I entered is 50070, but somewhere in the error it shows port 9000, and I don't know why. I have shown the exact error in EDIT 2 in the question details. Yes, exactly! I don't know where those came from; I did not write them in any XML config file. As for port 50075, Googling suggests it is the datanode's port, and pr2.embs is the name of the server 192.168.12.52. However, I also don't know where that Hadoop hostname comes from, because I did not enter it in any of its XML config files; I double-checked. It sounds like your DNS or DHCP server has a cached entry, or your /etc/hosts file has conflicting entries. @Kristada673 If the above answer worked for you, could you please comment? Thanks.
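
As a hedged illustration of that last point: if the client machine cannot resolve pr2.embs (the hostname the WebHDFS redirect points at), one workaround, assuming 192.168.12.52 really is that host, is to add a matching entry to the client's /etc/hosts:

# /etc/hosts on the client machine (illustrative; verify the hostname/IP mapping first)
192.168.12.52   pr2.embs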