
MySQL: importing data into HDFS with Sqoop (bugs)

Tags: mysql, hadoop, hdfs, cloudera, sqoop

I followed this tutorial. I installed the Hadoop services (HDFS, Hive, Sqoop, Hue, and so on) using Cloudera Manager. I am running Ubuntu 12.04 LTS. When I try to import data from MySQL into HDFS, the MapReduce job runs indefinitely without returning any error. Note that the table being imported has only 4 columns and 10 rows.

Here is what I did:

    sqoop import --connect jdbc:mysql://localhost/employees --username hadoop --password password --table departments -m 1 --target-dir /user/sqoop2/sqoop-mysql/department

    Warning: /opt/cloudera/parcels/CDH-5.5.2-1.cdh5.5.2.p0.4/bin/../lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
    Please set $ACCUMULO_HOME to the root of your Accumulo installation.
    16/02/23 17:49:09 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.5.2
    16/02/23 17:49:09 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
    16/02/23 17:49:10 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
    16/02/23 17:49:10 INFO tool.CodeGenTool: Beginning code generation
    16/02/23 17:49:11 INFO manager.SqlManager: Executing SQL statement:  SELECT t.* FROM `departments` AS t LIMIT 1
    16/02/23 17:49:11 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `departments` AS t LIMIT 1
    16/02/23 17:49:11 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce
    Note: /tmp/sqoop-root/compile/6bdeb198a0c249392703e3fc0070cb64/departments.java uses or overrides a deprecated API.
    Note: Recompile with -Xlint:deprecation for details.
    16/02/23 17:49:19 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-root/compile/6bdeb198a0c249392703e3fc0070cb64/departments.jar
    16/02/23 17:49:19 WARN manager.MySQLManager: It looks like you are importing from mysql.
    16/02/23 17:49:19 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
    16/02/23 17:49:19 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
    16/02/23 17:49:19 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
    16/02/23 17:49:19 INFO mapreduce.ImportJobBase: Beginning import of departments
    16/02/23 17:49:20 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
    16/02/23 17:49:24 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
    16/02/23 17:49:24 INFO client.RMProxy: Connecting to ResourceManager at hadoopUser/10.0.2.15:8032
    16/02/23 17:49:31 INFO db.DBInputFormat: Using read commited transaction isolation
    16/02/23 17:49:31 INFO mapreduce.JobSubmitter: number of splits:1
    16/02/23 17:49:33 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1456236806433_0004
    16/02/23 17:49:34 INFO impl.YarnClientImpl: Submitted application application_1456236806433_0004
    16/02/23 17:49:34 INFO mapreduce.Job: The url to track the job: http://hadoopUser:8088/proxy/application_1456236806433_0004/
    16/02/23 17:49:34 INFO mapreduce.Job: Running job: job_1456236806433_0004


Note that the MapReduce job never actually started. You should run a test wordcount job on your cluster to verify that MapReduce itself works.
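
A minimal sketch of such a test, using the examples jar shipped in the CDH parcel (the HDFS paths here are only illustrative; the jar location assumes the parcel layout visible in the log above):

    # stage some sample input in HDFS (paths are only examples)
    hdfs dfs -mkdir -p /user/sqoop2/wordcount/input
    hdfs dfs -put /etc/hosts /user/sqoop2/wordcount/input

    # run the stock wordcount example; if this also hangs at 0%, the problem
    # is in YARN/MapReduce rather than in Sqoop or the MySQL connection
    hadoop jar /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar \
        wordcount /user/sqoop2/wordcount/input /user/sqoop2/wordcount/output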

The log you posted contains a "The url to track the job:" line. The next time you run the job, open that URL to check the MapReduce logs. Your database connection looks fine, since the statement SELECT t.* FROM `departments` AS t LIMIT 1 (the INFO manager.SqlManager line) executed successfully.

Hi Sumit Kumar Ghosh, I have opened the link. I noticed that the progress is 0%. I don't understand what the problem is, since I don't get any error message. Could this be a permissions issue? This time I ran the command as the hdfs user, and the result was the same. I have added an image of the link above.
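
If the tracking URL only ever shows 0% and no error, a sketch of how the same application can be inspected from the command line, using the application ID from the log above (yarn logs assumes log aggregation is enabled on the cluster):

    # check the application's current state, progress and diagnostics
    yarn application -status application_1456236806433_0004

    # fetch the aggregated container logs (available once the app finishes or is killed)
    yarn logs -applicationId application_1456236806433_0004

If the application is stuck in the ACCEPTED state, the diagnostics section of the status output will usually say why YARN could not allocate containers for it.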