
elasticsearch: unable to start logstash against elasticsearch (org.elasticsearch.transport.ReceiveTimeoutTransportException)


I am following the getting started guide, but I cannot get it to work against elasticsearch.

My environment is Linux Fedora, logstash 1.4.2, elasticsearch 1.1.1.

I start elasticsearch and verify that it is up:

[2015-01-16 11:12:33,039][INFO ][transport                ] [Adonis] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/192.168.13.47:9300]}
[2015-01-16 11:12:36,171][INFO ][cluster.service          ] [Adonis] new_master [Adonis][SzTj0QJNSVOweE9Dd630BQ][arq.mycompany.org][inet[/192.168.13.47:9300]], reason: zen-disco-join (elected_as_master)
[2015-01-16 11:12:36,190][INFO ][discovery                ] [Adonis] elasticsearch/SzTj0QJNSVOweE9Dd630BQ
[2015-01-16 11:12:36,208][INFO ][http                     ] [Adonis] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/192.168.13.47:9200]}
[2015-01-16 11:12:36,252][INFO ][gateway                  ] [Adonis] recovered [0] indices into cluster_state
[2015-01-16 11:12:36,252][INFO ][node                     ] [Adonis] started
curl "

Checking the ports with netstat:

netstat -na | grep LIST | grep 93

tcp        0      0 0.0.0.0:59693               0.0.0.0:*                   LISTEN      
tcp        0      0 :::9300                     :::*                        LISTEN      
tcp        0      0 :::9301                     :::*                        LISTEN      
tcp        0      0 :::9302                     :::*                        LISTEN      
A logstash test against stdout runs fine:

bin/logstash -e 'input { stdin { } } output { stdout {} }'
But then I try to set the output to elasticsearch and get an exception:

./logstash -e 'input { stdin { } } output { elasticsearch { host => localhost } }'
Note that first I see the "added" entry in the elasticsearch log, then logstash fails, and then the "removed" entry shows up in the elasticsearch log.

elasticsearch log:

[2015-01-16 11:18:06,345][INFO ][cluster.service          ] [Adonis] added {[logstash-arq.mycompany.org-30982-2010][RaaZaGBwRcuVo4h48eD_yw][arq.mycompany.org][inet[/192.168.13.47:9304]]{data=false, client=true},}, reason: zen-disco-receive(join from node[[logstash-arq.mycompany.org-30982-2010][RaaZaGBwRcuVo4h48eD_yw][arq.mycompany.org][inet[/192.168.13.47:9304]]{data=false, client=true}])
[2015-01-16 11:18:10,453][INFO ][cluster.service          ] [Adonis] removed {[logstash-arq.mycompany.org-30982-2010][RaaZaGBwRcuVo4h48eD_yw][arq.mycompany.org][inet[/192.168.13.47:9304]]{data=false, client=true},}, reason: zen-disco-node_failed([logstash-arq.mycompany.org-30982-2010][RaaZaGBwRcuVo4h48eD_yw][arq.mycompany.org][inet[/192.168.13.47:9304]]{data=false, client=true}), reason transport disconnected (with verified connect)
It seems that it adds the client but then disconnects (???)

logstash log:

./logstash -e 'input { stdin { } } output { elasticsearch { host => localhost } }'

log4j, [2015-01-16T11:25:40.750]  WARN: org.elasticsearch.discovery.zen.ping.unicast: [logstash-arq.mycompany.org-31286-2010] failed to send ping to [[#zen_unicast_3#][arq.mycompany.org][inet[localhost/127.0.0.1:9302]]]
org.elasticsearch.transport.ReceiveTimeoutTransportException: [][inet[localhost/127.0.0.1:9302]][discovery/zen/unicast] request_id [0] timed out after [3751ms]
    at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:356)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:897)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:919)
    at java.lang.Thread.run(Thread.java:736)
log4j, [2015-01-16T11:25:40.750]  WARN: org.elasticsearch.discovery.zen.ping.unicast: [logstash-arq.mycompany.org-31286-2010] failed to send ping to [[#zen_unicast_2#][arq.mycompany.org][inet[localhost/127.0.0.1:9301]]]
org.elasticsearch.transport.ReceiveTimeoutTransportException: [][inet[localhost/127.0.0.1:9301]][discovery/zen/unicast] request_id [3] timed out after [3751ms]
    at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:356)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:897)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:919)
    at java.lang.Thread.run(Thread.java:736)
Unhandled exception
Type=Segmentation error vmState=0x00000000
J9Generic_Signal_Number=00000004 Signal_Number=0000000b Error_Value=00000000 Signal_Code=00000001
Handler1=F771949B Handler2=F76F2915 InaccessibleAddress=00000012
EDI=F7777560 ESI=D2B42846 EAX=00000012 EBX=00000000
ECX=D545AE34 EDX=0000FFFF
EIP=F6578E1D ES=002B DS=002B ESP=D545ADF0
EFlags=00210206 CS=0023 SS=002B EBP=D12A8700
Module=/opt/IBM/SDP/jdk/jre/lib/i386/libjclscar_24.so
Module_base_address=F6533000 Symbol=sun_misc_Unsafe_getLong__Ljava_lang_Object_2J
Symbol_address=F6578DCC
Target=2_40_20110726_087724 (Linux 3.6.11-4.fc16.x86_64)
CPU=x86 (8 logical CPUs) (0x3e051c000 RAM)
----------- Stack Backtrace -----------
(0xF76E6752 [libj9prt24.so+0xb752])
(0xF76F1F60 [libj9prt24.so+0x16f60])
(0xF76E67E5 [libj9prt24.so+0xb7e5])
(0xF76E6908 [libj9prt24.so+0xb908])
(0xF76E6584 [libj9prt24.so+0xb584])
(0xF76F1F60 [libj9prt24.so+0x16f60])
(0xF76E65F8 [libj9prt24.so+0xb5f8])
(0xF771A1D3 [libj9vm24.so+0xf1d3])
(0xF7719E53 [libj9vm24.so+0xee53])
(0xF76F1F60 [libj9prt24.so+0x16f60])
(0xF771963B [libj9vm24.so+0xe63b])
(0xF76F2A8D [libj9prt24.so+0x17a8d])
(0xF77BE410)
---------------------------------------
JVMDUMP006I Processing dump event "gpf", detail "" - please wait.
JVMDUMP032I JVM requested System dump using '/home/MYUSER/Software/logstash-1.4.2/bin/core.20150116.112541.31286.0001.dmp' in response to an event
JVMPORT030W /proc/sys/kernel/core_pattern setting "|/usr/libexec/abrt-hook-ccpp %s %c %p %u %g %t e" specifies that the core dump is to be piped to an external program.  Attempting to rename either core or core.31370.

JVMDUMP010I System dump written to /home/MYUSER/Software/logstash-1.4.2/bin/core.20150116.112541.31286.0001.dmp
JVMDUMP032I JVM requested Java dump using '/home/MYUSER/Software/logstash-1.4.2/bin/javacore.20150116.112541.31286.0002.txt' in response to an event
JVMDUMP010I Java dump written to /home/MYUSER/Software/logstash-1.4.2/bin/javacore.20150116.112541.31286.0002.txt
JVMDUMP032I JVM requested Snap dump using '/home/MYUSER/Software/logstash-1.4.2/bin/Snap.20150116.112541.31286.0003.trc' in response to an event
JVMDUMP010I Snap dump written to /home/MYUSER/Software/logstash-1.4.2/bin/Snap.20150116.112541.31286.0003.trc
JVMDUMP013I Processed dump event "gpf", detail "".
[MYUSER@cl004300l bin]$ 
If I change the protocol to protocol => http, elasticsearch crashes:

Unhandled exception
Type=Segmentation error vmState=0x00000000
J9Generic_Signal_Number=00000004 Signal_Number=0000000b Error_Value=00000000 Signal_Code=00000001
Handler1=F76B549B Handler2=F768E915 InaccessibleAddress=000001E6
EDI=F7713560 ESI=B38E163A EAX=0000001C EBX=B3526A00
ECX=B3F1F9CC EDX=000001B2
EIP=F64D1A40 ES=002B DS=002B ESP=B3F1F98C
EFlags=00210286 CS=0023 SS=002B EBP=B3D24B00
Module=/opt/IBM/SDP/jdk/jre/lib/i386/libjclscar_24.so
Module_base_address=F648A000 Symbol=sun_misc_Unsafe_putLong__Ljava_lang_Object_2JJ

JVMDUMP006I Processing dump event "gpf", detail "" - please wait.
JVMDUMP032I JVM requested System dump using '/home/MYUSER/Software/elasticsearch-1.1.1/bin/core.20150119.095615.5602.0001.dmp' in response to an event
JVMPORT030W /proc/sys/kernel/core_pattern setting "|/usr/libexec/abrt-hook-ccpp %s %c %p %u %g %t e" specifies that the core dump is to be piped to an external program.  Attempting to rename either core or core.5723.

*** glibc detected *** /opt/IBM/SDP/jdk/bin/java: malloc(): memory corruption: 0xb3f19da0 ***

I have been struggling with this for days, so I would really appreciate any help or hints towards a solution.

Getting logstash to connect to an older version of elasticsearch is sometimes problematic. You are better off adding protocol => http to your elasticsearch output; that should fix your problem.
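As a sketch, the suggested fix applied to the one-liner from the question might look like the following config fragment (the host and port values are assumptions for a single-node local setup):

```
output {
  elasticsearch {
    host => "localhost"
    port => 9200            # HTTP port, not the 9300 transport port
    protocol => "http"      # talk HTTP instead of joining the cluster as a node
  }
}
```

With protocol => "http", logstash sends requests to the 9200 HTTP endpoint instead of joining the cluster over the 9300 transport as an embedded node, which sidesteps the node-level version compatibility issues that produce the discovery ping timeouts shown above.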

Hi Alcanzar, I also tried changing the protocol to http, but in that case elasticsearch crashes with an ugly unhandled exception, as shown in the updated question.

Is there any reason you are using the IBM JDK? Because this definitely looks like a JDK crash.

I develop against WebSphere, so it is the system default. I will give Oracle's JDK a try; it may well be IBM's fault :-)