Hadoop Sqoop: selecting specific columns


In a Sqoop statement, is there a way to select only specific columns from the Oracle side?

Statement 1 works, statement 2 fails. You can do this with the --columns, --table and --where clauses. A sample follows:

sqoop import \
  --connect jdbc:oracle:thin:@server1.companyxyz.com:4567/prod/DATABASE=schema1 \
  --username xyz \
  --password xyz \
  --table customers \
  --columns "cust_id,name,address,date,history,occupation" \
  --where "item >= 1234" \
  --target-dir /tmp/customers \
  -m 8 \
  --split-by cust_id \
  --fields-terminated-by ',' \
  --escaped-by '\\' \
  --hive-drop-import-delims \
  --map-column-java cust_id=String,name=String,address=String,date=String,history=String,occupation=String

I suspect that

    select cust_id, name, address, date, history, occupation from schema1.customers where item >= 1234

is not correct. I have tried all possible scenarios. Try running it in the database directly. Also delete the directory /tmp/customers before running the second statement. You should also paste the error.
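One quick way to follow that advice without running a full import is `sqoop eval`, which executes a SQL statement against the source database and prints the result. A minimal sketch, reusing the placeholder connection string, user, and password from the question:

```shell
# Sketch: check the column list and WHERE clause directly against Oracle
# before importing. The connection details below are the placeholder
# values from the question, not a real server.
sqoop eval \
  --connect jdbc:oracle:thin:@server1.companyxyz.com:4567/prod/DATABASE=schema1 \
  --username xyz \
  --password xyz \
  --query "SELECT cust_id, name, address, date, history, occupation FROM schema1.customers WHERE item >= 1234"
```

If this fails, the problem is in the SQL itself (column names, the WHERE predicate) rather than in the Sqoop import options.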

sqoop import \
     --connect "jdbc:mysql://sandbox.hortonworks.com:3306/retail_db" \
     --username=retail_dba \
     --password=hadoop \
     --query "select department_id, department_name from departments where \$CONDITIONS" \
     --target-dir /user/root/testing \
     --split-by department_id \
     --outdir java_files \
     --hive-drop-import-delims \
     -m 8 \
     --fields-terminated-by , \
     --escaped-by '\'
For --columns I put "cust_id,name,…etc" (no spaces), and likewise for --where, and it worked. Thanks.