Cannot load a file from Hadoop HDFS


I'm having trouble loading a CSV file. I keep getting the following error:

Input(s):
Failed to read data from "hdfs://localhost:9000/user/der/1987.csv"

Output(s):
Failed to produce result in "hdfs://localhost:9000/user/der/totalmiles3"

Failed Jobs:
JobId                    Alias                          Feature            Message Outputs
job_local602774674_0001  milage_recs,records,tot_miles  GROUP_BY,COMBINER
Message: ENOENT: No such file or directory
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmodImpl(Native Method)
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmod(NativeIO.java:230)
        at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:724)
        at org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:502)
        at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:600)
        at org.apache.hadoop.mapreduce.JobResourceUploader.uploadFiles(JobResourceUploader.java:94)
        at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:98)
        at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:193)

...blah...

Input(s):
Failed to read data from "/user/der/1987.csv"

Output(s):
Failed to produce result in "hdfs://localhost:9000/user/der/totalmiles3"
Looking at the Hadoop HDFS installation on my local machine, I can see the file. In fact, the file exists in multiple locations, such as /, /user/, etc.

hdfs dfs -ls /user/der
Found 1 items
-rw-r--r--   1 der supergroup  127162942 2015-05-28 12:42 /user/der/1987.csv
My Pig script is as follows:

records = LOAD '1987.csv' USING PigStorage(',') AS
    (Year, Month, DayofMonth, DayOfWeek, DepTime, CRSDepTime, ArrTime,
     CRSArrTime, UniqueCarrier, FlightNum, TailNum, ActualElapsedTime,
     CRSElapsedTime, AirTime, ArrDelay, DepDelay, Origin, Dest,
     Distance:int, TaxIn, TaxiOut, Cancelled, CancellationCode,
     Diverted, CarrierDelay, WeatherDelay, NASDelay, SecurityDelay,
     lateAircraftDelay);
milage_recs = GROUP records ALL;
tot_miles = FOREACH milage_recs GENERATE SUM(records.Distance);
STORE tot_miles INTO 'totalmiles3';
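As a quick local sanity check of what the script computes, the same total can be produced with awk by summing the Distance column, which is the 19th comma-separated field in the schema above. This is only a sketch; it assumes the CSV has no header row, just as the Pig script does.

```shell
# Sketch: sum the Distance column (19th field) of the CSV with awk,
# mirroring the GROUP ALL + SUM in the Pig script above.
sum_distance() {
  awk -F',' '{ s += $19 } END { print s }' "$1"
}
```

For example, `sum_distance 1987.csv` on a local copy of the file should match the value Pig stores in totalmiles3.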
I ran pig with the -x local option. With -x local I can read the file from my local hard drive and get the correct answer, and tail -f on the Hadoop namenode log does not scroll, which proves everything ran locally off my hard drive:

pig  -x local totalmiles.pig
Now I get the errors. The Hadoop name server seems to receive the request, because when I use tail -f I see the log scrolling:

pig totalmiles.pig

records = LOAD '/user/der/1987.csv' USING PigStorage(',') AS
I get the following error:

Input(s):
Failed to read data from "hdfs://localhost:9000/user/der/1987.csv"

Output(s):
Failed to produce result in "hdfs://localhost:9000/user/der/totalmiles3"

Failed Jobs:
JobId                    Alias                          Feature            Message Outputs
job_local602774674_0001  milage_recs,records,tot_miles  GROUP_BY,COMBINER
Message: ENOENT: No such file or directory
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmodImpl(Native Method)
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmod(NativeIO.java:230)
        at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:724)
        at org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:502)
        at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:600)
        at org.apache.hadoop.mapreduce.JobResourceUploader.uploadFiles(JobResourceUploader.java:94)
        at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:98)
        at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:193)

...blah...

Input(s):
Failed to read data from "/user/der/1987.csv"

Output(s):
Failed to produce result in "hdfs://localhost:9000/user/der/totalmiles3"
I checked the mkdir permissions using hdfs, and they seem fine:

hdfs dfs -mkdir /user/der/temp2
hdfs dfs -ls /user/der

Found 3 items
-rw-r--r--   1 der supergroup  127162942 2015-05-28 12:42 /user/der/1987.csv
drwxr-xr-x   - der supergroup          0 2015-05-28 16:21 /user/der/temp2
drwxr-xr-x   - der supergroup          0 2015-05-28 15:57 /user/der/test
I tried pig with the mapreduce option, but I still get the same type of error:

pig -x mapreduce totalmiles.pig

2015-05-28 20:58:44,608 [JobControl] INFO  org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob - PigLatin:totalmiles.pig while submitting

ENOENT: No such file or directory
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmodImpl(Native Method)
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmod(NativeIO.java:230)
        at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:724)
        at org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:502)
        at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:600)
        at org.apache.hadoop.mapreduce.JobResourceUploader.uploadFiles(JobResourceUploader.java:94)
        at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:98)
        at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:193)
        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
The temp dir in my core-site.xml is as follows:

<property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop</value>
    <description>A base for other temporary directories.</description>
</property>
<property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/usr/local/hadoop/dfs/namenode</value>
</property>
<property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/usr/local/hadoop/dfs/datanode</value>
</property>
I've gotten a bit further debugging this problem. It seems my namenode is misconfigured, because I cannot reformat it:

[screenshot of the namenode reformat error]
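For reference, the usual way to reformat a namenode is sketched below. Note that this is destructive (it wipes all HDFS metadata) and assumes the stock Hadoop sbin scripts and the hdfs CLI are on the PATH:

```shell
# Stop HDFS, reformat the namenode metadata directory
# (dfs.namenode.name.dir, i.e. /usr/local/hadoop/dfs/namenode above),
# then start HDFS again. All existing HDFS data references are lost.
stop-dfs.sh
hdfs namenode -format
start-dfs.sh
```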

We have to set the Hadoop file path as /user/der/1987.csv:

records = LOAD '/user/der/1987.csv' USING PigStorage(',') AS
    (Year, Month, DayofMonth, DayOfWeek, DepTime, CRSDepTime, ArrTime,
     CRSArrTime, UniqueCarrier, FlightNum, TailNum, ActualElapsedTime,
     CRSElapsedTime, AirTime, ArrDelay, DepDelay, Origin, Dest,
     Distance:int, TaxIn, TaxiOut, Cancelled, CancellationCode,
     Diverted, CarrierDelay, WeatherDelay, NASDelay, SecurityDelay,
     lateAircraftDelay);

For testing, you can put the file 1987.csv in the path where you execute the Pig script, i.e. place 1987.csv and the .pig file in the same location.
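The point about paths can be sketched as follows: HDFS clients resolve a relative path against the user's home directory /user/&lt;name&gt;, which is why '1987.csv' and '/user/der/1987.csv' refer to the same file here ("der" is just the username from the question):

```shell
# Sketch: how an HDFS client expands a relative path against the
# user's home directory /user/<name>.
resolve_hdfs_path() {
  path="$1"; user="$2"
  case "$path" in
    /*) printf '%s\n' "$path" ;;                   # absolute: used as-is
    *)  printf '/user/%s/%s\n' "$user" "$path" ;;  # relative: under /user/<name>
  esac
}

resolve_hdfs_path 1987.csv der    # prints /user/der/1987.csv
```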

Thanks for the prompt response. If you mean that for testing I can put the pig file where 1987.csv is, I did that: pig -x local totalmiles.pig. I get the correct answer, and tail -f on the Hadoop namenode log does not scroll, which proves everything ran locally on my hard drive. Then I ran with this change: pig totalmiles.pig with records = LOAD '/user/der/1987.csv' USING PigStorage(','), and I get the following error: Failed Jobs: JobId Alias Feature Message Outputs. The error is a Java exception in the Pig Latin at the mkdir permission. hadoop/logs/namenode.log scrolls, which seems fine. hdfs dfs -mkdir /user/der/temp2 proves that the permissions on hdfs are correct.

@Derek: Please update the question with the error message you see when using LOAD. Can you try switching to MapReduce mode and running the job with pig -x mapreduce? Also, please check the directory you run the pig command from; that directory's permissions matter here.
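The last suggestion, that the directory you launch pig from matters, can be checked with a small sketch, since the chmod/ENOENT failure in the traces above occurs while job resources are being staged on the local filesystem:

```shell
# Sketch: verify the current directory is writable before launching pig,
# since Pig/Hadoop stage job resources (jars, split metadata) locally first.
check_cwd_writable() {
  if [ -w "$PWD" ]; then
    echo "writable: $PWD"
  else
    echo "NOT writable: $PWD" >&2
    return 1
  fi
}
```

For example: check_cwd_writable && pig totalmiles.pig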