C++ HDFS test fails with exceptions under Hadoop
I am using Hadoop 2.2.0 and trying to run this hdfs_test.cpp application:
#include "hdfs.h"
#include <fcntl.h>   /* O_WRONLY, O_CREAT */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv) {
    hdfsFS fs = hdfsConnect("default", 0);
    const char* writePath = "/tmp/testfile.txt";
    hdfsFile writeFile = hdfsOpenFile(fs, writePath, O_WRONLY|O_CREAT, 0, 0, 0);
    if (!writeFile) {
        fprintf(stderr, "Failed to open %s for writing!\n", writePath);
        exit(-1);
    }
    const char* buffer = "Hello, World!";
    tSize num_written_bytes = hdfsWrite(fs, writeFile, (void*)buffer, strlen(buffer)+1);
    if (hdfsFlush(fs, writeFile)) {
        fprintf(stderr, "Failed to 'flush' %s\n", writePath);
        exit(-1);
    }
    hdfsCloseFile(fs, writeFile);
    return 0;
}
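For context, a build command along these lines is typically used for libhdfs programs; the include and library paths below are assumptions based on a standard Hadoop tarball layout and JDK install, so adjust them for your machine:

```
# Sketch only: paths assume $HADOOP_HOME and $JAVA_HOME point at
# standard layouts; the JVM library directory varies by JDK version/arch.
gcc hdfs_test.cpp \
    -I$HADOOP_HOME/include \
    -I$JAVA_HOME/include -I$JAVA_HOME/include/linux \
    -L$HADOOP_HOME/lib/native -lhdfs \
    -L$JAVA_HOME/jre/lib/amd64/server -ljvm \
    -o hdfs_test
```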
I compiled it, but when I run it with ./hdfs_test I get the following:
loadFileSystems error:
(unable to get stack trace for java.lang.NoClassDefFoundError exception: ExceptionUtils::getStackTrace error.)
hdfsBuilderConnect(forceNewInstance=0, nn=default, port=0, kerbTicketCachePath=(NULL), userName=(NULL)) error:
(unable to get stack trace for java.lang.NoClassDefFoundError exception: ExceptionUtils::getStackTrace error.)
hdfsOpenFile(/tmp/testfile.txt): constructNewObjectOfPath error:
(unable to get stack trace for java.lang.NoClassDefFoundError exception: ExceptionUtils::getStackTrace error.)
Failed to open /tmp/testfile.txt for writing!
Maybe it is a problem with the classpath. My $HADOOP_HOME is /usr/local/hadoop, and this is my *CLASSPATH* variable:
echo $CLASSPATH
/usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/*:/usr/local/hadoop/share/hadoop/common/*:/usr/local/hadoop/share/hadoop/hdfs:/usr/local/hadoop/share/hadoop/hdfs/lib/*:/usr/local/hadoop/share/hadoop/hdfs/*:/usr/local/hadoop/share/hadoop/yarn/lib/*:/usr/local/hadoop/share/hadoop/yarn/*:/usr/local/hadoop/share/hadoop/mapreduce/lib/*:/usr/local/hadoop/share/hadoop/mapreduce/*:/contrib/capacity-scheduler/*.jar
Thanks for any help.

When using JNI-based programs, I have run into problems with wildcards in the classpath. Try the direct-jar approach for the classpath, like the one generated in this sample code; I believe it should work then. The entire included sample does currently work. See also:

Try the following:
hadoop classpath --glob
Then add the result to the CLASSPATH variable in ~/.bashrc.

So, simply adding the output of hadoop classpath --glob is not going to work.

The correct way is:
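The underlying issue is that the JVM embedded by libhdfs through JNI treats CLASSPATH entries literally and does not expand `*` wildcards, so every jar has to appear explicitly. A minimal sketch of turning a directory of jars into such an explicit colon-separated list (the /tmp/cp_demo directory here is a hypothetical stand-in for a Hadoop lib directory):

```shell
# Hypothetical stand-in for a Hadoop jar directory
mkdir -p /tmp/cp_demo/lib
touch /tmp/cp_demo/lib/common.jar /tmp/cp_demo/lib/hdfs.jar

# Expand what the '*' wildcard would have matched into an explicit list
CP=$(find /tmp/cp_demo -name '*.jar' | sort | tr '\n' ':')
echo "$CP"
```

This is the same idea the answers below implement against the real $HADOOP_HOME tree.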
export CLASSPATH=${HADOOP_HOME}/etc/hadoop:`find ${HADOOP_HOME}/share/hadoop/ | awk '{path=path":"$0}END{print path}'`
export LD_LIBRARY_PATH="${HADOOP_HOME}/lib/native":$LD_LIBRARY_PATH
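The awk program in the CLASSPATH export above just appends every line of find output to an accumulator with a ':' separator and prints the result at the end; a quick illustration on two sample paths:

```shell
# Same awk join as in the export: each input line is appended
# to the accumulator 'path' behind a ':' separator
printf 'a.jar\nb.jar\n' | awk '{path=path":"$0}END{print path}'
```

The output starts with a leading colon, which is harmless on a classpath since it follows the ${HADOOP_HOME}/etc/hadoop entry.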
I ran into the same problem and solved it with:
export CLASSPATH=$(for p in $(hadoop classpath --glob | sed 's/:/ /g'); do find $p -name '*.jar' 2>/dev/null; done | tr '\n' ':')