Hadoop: Pig not reading a file from HDFS when run from a Pig script

I am trying to load a file from HDFS in a Pig script:

data = LOAD '/user/Z013W7X/typeahead/time_decayed_clickdata.tsv' using PigStorage('\t') as (keyword :chararray , search_count: double, clicks: double, cartadds: double);
The path above is an HDFS path. When I run the same statement from the Pig grunt shell it executes without any problem, but the same code run as a script fails with:

Input(s): Failed to read data from "/user/Z013W7X/typeahead/time_decayed_clickdata.tsv"

This is the shell script I use to invoke the Pig script:

jar_path=/home_dir/z013w7x/workspace/tapipeline/Typeahead-APP/tapipeline/libs/takeygen-0.0.1-SNAPSHOT-jar-with-dependencies.jar
scripts_path=/home_dir/z013w7x/workspace/tapipeline/Typeahead-APP/tapipeline/pig_scripts/daily_running_scripts
dataset_path=hdfs://d-3zkyk02.target.com:8020/user/Z013W7X/typeahead
data_files=/user/Z013W7X/typeahead/data_files.zip#data
ngrams_gen_script=$scripts_path/generate_ngrams.pig
time_decayed_clickdata_file=$dataset_path/time_decayed_clickdata.tsv
all_suggestions_file=$results_path/all_suggestions.tsv
top_suggestions_file=$results_path/top_suggestions.tsv

pig -f $ngrams_gen_script -param "INPUT_TIME_DECAYED_CLICKDATA_FILE=$time_decayed_clickdata_file" -param "OUTPUT_ALL_SUGGESTIONS_FILE=$all_suggestions_file" -param "OUTPUT_TOP_SUGGESTIONS_FILE=$top_suggestions_file" -param "REGISTER=$jar_path" -param "INPUT_DATA_ARCHIVE=$data_files"
The Pig script is as follows:

SET mapred.create.symlink yes
SET mapred.cache.archives $INPUT_DATA_ARCHIVE

register $REGISTER
click_data = LOAD '$INPUT_TIME_DECAYED_CLICKDATA_FILE' using PigStorage('\t') as (keyword :chararray , search_count: double, clicks: double, cartadds: double);
ordered_click_data = order click_data by search_count desc;
sample_data = LIMIT ordered_click_data 3000000;
mclick_data = foreach sample_data generate keyword, CEIL(search_count) as search_count, CEIL(clicks) as clicks, CEIL(cartadds) as cartadds;
fclick_data = filter mclick_data by (keyword is not null and search_count is not null and keyword != 'NULL' );

ngram_data = foreach fclick_data generate flatten(com.tgt.search.typeahead.takeygen.udf.NGramScore(keyword, search_count, clicks, cartadds))
 as (stemmedKeyword:chararray, keyword:chararray, dscore:double, isUserQuery:int, contrib:double, keyscore:chararray);

grouped_data = group ngram_data by stemmedKeyword;
agg_data = foreach grouped_data generate group, flatten(com.tgt.search.typeahead.takeygen.udf.StemmedKeyword(ngram_data.keyscore)) as keyword,
                                                                                                                 SUM(ngram_data.dscore) as ascore, SUM(ngram_data.isUserQuery) as isUserQuery, SUM(ngram_data.contrib) as contrib;
filter_queries = filter agg_data by isUserQuery > 0;
all_suggestions = foreach  filter_queries generate keyword, ascore;
ordered_suggestions = order all_suggestions by ascore desc;
top_suggestions = limit ordered_suggestions 200000;

rmf /tmp/all_suggestions
rmf $OUTPUT_ALL_SUGGESTIONS_FILE
rmf /tmp/top_suggestions
rmf $OUTPUT_TOP_SUGGESTIONS_FILE

store ordered_suggestions  into '/tmp/all_suggestions' using PigStorage('\t','-schema');
store top_suggestions  into '/tmp/top_suggestions' using PigStorage('\t','-schema');
cp /tmp/all_suggestions/part-r-00000 $OUTPUT_ALL_SUGGESTIONS_FILE
cp /tmp/top_suggestions/part-r-00000 $OUTPUT_TOP_SUGGESTIONS_FILE

You need to add
hdfs://namenode_host:54310
before the input path. Try the following:

data = LOAD 'hdfs://namenode_host:54310/user/Z013W7X/typeahead/time_decayed_clickdata.tsv' using PigStorage('\t') as (keyword :chararray , search_count: double, clicks: double, cartadds: double);
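If you are not sure which namenode host and port to use, one option is to read them from the cluster configuration rather than hard-coding them. A minimal sketch, assuming the hdfs client is on the PATH (the variable name is illustrative):

# Print the default filesystem URI, e.g. hdfs://d-3zkyk02.target.com:8020
namenode_uri=$(hdfs getconf -confKey fs.defaultFS)

# Prefix absolute HDFS paths with it before handing them to Pig
time_decayed_clickdata_file=$namenode_uri/user/Z013W7X/typeahead/time_decayed_clickdata.tsv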

Comments:

"How are you running the script?" — "I am running it from a shell script."
"Make sure you are not running the Pig script in local mode." — "No, that is not it... the problem is in reading the input file."
"Can you try replacing data_files=/user/Z013W7X/typeahead/data_files.zip#data with data_files=hdfs://d-3zkyk02.target.com:8020/user/Z013W7X/typeahead/data_files.zip#data?"
"Try now. Updated. You need to add the namenode host and port."
"Is the path correct? Please check with hdfs dfs -ls." — "The same path works when I use it directly... it only shows the problem when I use it in the script."
"What exactly are you running? Can you paste it into the question?"
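Putting the comment suggestions together, a quick sanity check before invoking the script can rule out the two usual causes (a path that exists only on the local filesystem, and Pig running in local mode). A hedged sketch using standard Hadoop/Pig commands:

# Confirm the input really exists on HDFS, not only on the local disk
hdfs dfs -ls /user/Z013W7X/typeahead/time_decayed_clickdata.tsv

# Run Pig explicitly in mapreduce mode; with -x local the same absolute
# path would be resolved against the local filesystem and the LOAD would fail
# (remaining -param arguments as in the shell script above)
pig -x mapreduce -f $ngrams_gen_script -param "INPUT_TIME_DECAYED_CLICKDATA_FILE=$time_decayed_clickdata_file" ...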