Hadoop Hive execution: "insert into ... values ..." is very slow

Tags: hadoop, hive, yarn, tez

I built a Hadoop & Hive cluster and tried to run some tests, but it is really slow.

The table

value_count

+--------------------------------------------------------------+--+
|                        createtab_stmt                        |
+--------------------------------------------------------------+--+
| CREATE TABLE `value_count`(                                  |
|   `key` int,                                                 |
|   `count` int,                                               |
|   `create_date` date COMMENT '????')                         |
| COMMENT 'This is a group table'                              |
| ROW FORMAT SERDE                                             |
|   'org.apache.hadoop.hive.ql.io.orc.OrcSerde'                |
| STORED AS INPUTFORMAT                                        |
|   'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'          |
| OUTPUTFORMAT                                                 |
|   'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'         |
| LOCATION                                                     |
|   'hdfs://avatarcluster/hive/warehouse/test.db/value_count'  |
| TBLPROPERTIES (                                              |
|   'COLUMN_STATS_ACCURATE'='{\"BASIC_STATS\":\"true\"}',      |
|   'numFiles'='7',                                            |
|   'numRows'='7',                                             |
|   'rawDataSize'='448',                                       |
|   'totalSize'='2297',                                        |
|   'transient_lastDdlTime'='1496217645')                      |
+--------------------------------------------------------------+--+
SQL execution

insert into value_count values (5, 1, '2017-05-06');
I have run this SQL several times; each execution takes about 4 or 5 minutes.

Hadoop container log

2017-05-31 16:00:45,041 [INFO] [Dispatcher thread {Central}] |app.DAGAppMaster|: Central Dispatcher queue size after DAG completion, before cleanup: 0
2017-05-31 16:00:45,041 [INFO] [Dispatcher thread {Central}] |app.DAGAppMaster|: Waiting for next DAG to be submitted.
2017-05-31 16:00:45,042 [INFO] [Dispatcher thread {Central}] |app.DAGAppMaster|: Cleaning up DAG: name=insert into value_count valu...'2017-05-06')(Stage-1), with id=dag_1490688643958_53401_1
2017-05-31 16:00:45,042 [INFO] [Dispatcher thread {Central}] |container.AMContainerMap|: Cleaned up completed containers on dagComplete. Removed=0, Remaining=1
2017-05-31 16:00:45,044 [INFO] [Dispatcher thread {Central}] |app.DAGAppMaster|: Completed cleanup for DAG: name=insert into value_count valu...'2017-05-06')(Stage-1), with id=dag_1490688643958_53401_1
2017-05-31 16:00:50,749 [INFO] [DelayedContainerManager] |rm.YarnTaskSchedulerService|: No taskRequests. Container's idle timeout delay expired or is new. Releasing container, containerId=container_1490688643958_53401_01_000002, containerExpiryTime=1496217650577, idleTimeout=5000, taskRequestsCount=0, heldContainers=1, delayedContainers=0, isNew=false
2017-05-31 16:00:50,752 [INFO] [Dispatcher thread {Central}] |history.HistoryEventHandler|: [HISTORY][DAG:dag_1490688643958_53401_1][Event:CONTAINER_STOPPED]: containerId=container_1490688643958_53401_01_000002, stoppedTime=1496217650751, exitStatus=0
2017-05-31 16:00:50,753 [INFO] [ContainerLauncher #1] |launcher.TezContainerLauncherImpl|: Stopping container_1490688643958_53401_01_000002
2017-05-31 16:00:50,753 [INFO] [ContainerLauncher #1] |impl.ContainerManagementProtocolProxy|: Opening proxy : app08.hp.sp.tst.bmsre.com:51640
2017-05-31 16:00:51,628 [INFO] [Dispatcher thread {Central}] |container.AMContainerImpl|: Container container_1490688643958_53401_01_000002 exited with diagnostics set to Container failed, exitCode=-105. Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143

2017-05-31 16:01:29,678 [INFO] [AMRM Callback Handler Thread] |rm.YarnTaskSchedulerService|: Allocated: <memory:0, vCores:0> Free: <memory:6144, vCores:1> pendingRequests: 0 delayedContainers: 0 heartbeats: 51 lastPreemptionHeartbeat: 50
2017-05-31 16:02:19,740 [INFO] [AMRM Callback Handler Thread] |rm.YarnTaskSchedulerService|: Allocated: <memory:0, vCores:0> Free: <memory:6144, vCores:1> pendingRequests: 0 delayedContainers: 0 heartbeats: 101 lastPreemptionHeartbeat: 100
2017-05-31 16:03:09,801 [INFO] [AMRM Callback Handler Thread] |rm.YarnTaskSchedulerService|: Allocated: <memory:0, vCores:0> Free: <memory:6144, vCores:1> pendingRequests: 0 delayedContainers: 0 heartbeats: 151 lastPreemptionHeartbeat: 150
2017-05-31 16:03:59,858 [INFO] [AMRM Callback Handler Thread] |rm.YarnTaskSchedulerService|: Allocated: <memory:0, vCores:0> Free: <memory:6144, vCores:1> pendingRequests: 0 delayedContainers: 0 heartbeats: 201 lastPreemptionHeartbeat: 200
2017-05-31 16:04:49,915 [INFO] [AMRM Callback Handler Thread] |rm.YarnTaskSchedulerService|: Allocated: <memory:0, vCores:0> Free: <memory:6144, vCores:1> pendingRequests: 0 delayedContainers: 0 heartbeats: 251 lastPreemptionHeartbeat: 250
2017-05-31 16:05:39,971 [INFO] [AMRM Callback Handler Thread] |rm.YarnTaskSchedulerService|: Allocated: <memory:0, vCores:0> Free: <memory:6144, vCores:1> pendingRequests: 0 delayedContainers: 0 heartbeats: 301 lastPreemptionHeartbeat: 300
2017-05-31 16:06:09,581 [INFO] [DAGSubmissionTimer] |rm.TaskSchedulerManager|: TaskScheduler notified that it should unregister from RM
2017-05-31 16:06:09,581 [INFO] [DAGSubmissionTimer] |app.DAGAppMaster|: No current running DAG, shutting down the AM
2017-05-31 16:06:09,581 [INFO] [DAGSubmissionTimer] |app.DAGAppMaster|: DAGAppMasterShutdownHandler invoked
2017-05-31 16:06:09,581 [INFO] [DAGSubmissionTimer] |app.DAGAppMaster|: Handling DAGAppMaster shutdown
2017-05-31 16:06:09,582 [INFO] [AMShutdownThread] |app.DAGAppMaster|: Sleeping for 5 seconds before shutting down
2017-05-31 16:06:14,582 [INFO] [AMShutdownThread] |app.DAGAppMaster|: Calling stop for all the services
2017-05-31 16:06:14,582 [INFO] [AMShutdownThread] |rm.YarnTaskSchedulerService|: Initiating stop of YarnTaskScheduler
2017-05-31 16:06:14,582 [INFO] [AMShutdownThread] |rm.YarnTaskSchedulerService|: Releasing held containers
2017-05-31 16:06:14,583 [INFO] [AMShutdownThread] |rm.YarnTaskSchedulerService|: Removing all pending taskRequests
2017-05-31 16:06:14,583 [INFO] [AMShutdownThread] |history.HistoryEventHandler|: Stopping HistoryEventHandler
2017-05-31 16:06:14,583 [INFO] [AMShutdownThread] |recovery.RecoveryService|: Stopping RecoveryService
2017-05-31 16:06:14,583 [INFO] [AMShutdownThread] |recovery.RecoveryService|: Handle the remaining events in queue, queue size=0
2017-05-31 16:06:14,584 [INFO] [RecoveryEventHandlingThread] |recovery.RecoveryService|: EventQueue take interrupted. Returning
2017-05-31 16:06:14,584 [INFO] [AMShutdownThread] |recovery.RecoveryService|: Closing Summary Stream
2017-05-31 16:06:14,611 [INFO] [AMShutdownThread] |impl.SimpleHistoryLoggingService|: Stopping SimpleHistoryLoggingService, eventQueueBacklog=0
2017-05-31 16:06:14,611 [INFO] [HistoryEventHandlingThread] |impl.SimpleHistoryLoggingService|: EventQueue take interrupted. Returning
2017-05-31 16:06:14,613 [INFO] [DelayedContainerManager] |rm.YarnTaskSchedulerService|: AllocatedContainerManager Thread interrupted
2017-05-31 16:06:14,615 [INFO] [AMShutdownThread] |rm.YarnTaskSchedulerService|: Unregistering application from RM, exitStatus=SUCCEEDED, exitMessage=Session stats:submittedDAGs=0, successfulDAGs=1, failedDAGs=0, killedDAGs=0
, trackingURL=
2017-05-31 16:06:14,620 [INFO] [AMShutdownThread] |impl.AMRMClientImpl|: Waiting for application to be successfully unregistered.
2017-05-31 16:06:14,720 [INFO] [AMShutdownThread] |rm.YarnTaskSchedulerService|: Successfully unregistered application from RM
2017-05-31 16:06:14,721 [INFO] [AMShutdownThread] |rm.TaskSchedulerManager|: Shutting down AppCallbackExecutor
2017-05-31 16:06:14,721 [INFO] [AMRM Callback Handler Thread] |impl.AMRMClientAsyncImpl|: Interrupted while waiting for queue
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2048)
    at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    at org.apache.hadoop.yarn.client.api.async.impl.AMRMClientAsyncImpl$CallbackHandlerThread.run(AMRMClientAsyncImpl.java:274)
2017-05-31 16:06:14,726 [INFO] [AMShutdownThread] |mortbay.log|: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:0
2017-05-31 16:06:14,826 [INFO] [AMShutdownThread] |ipc.Server|: Stopping server on 49703
2017-05-31 16:06:14,827 [INFO] [IPC Server listener on 49703] |ipc.Server|: Stopping IPC Server listener on 49703
2017-05-31 16:06:14,827 [INFO] [AMShutdownThread] |ipc.Server|: Stopping server on 43709
2017-05-31 16:06:14,827 [INFO] [IPC Server Responder] |ipc.Server|: Stopping IPC Server Responder
2017-05-31 16:06:14,827 [INFO] [IPC Server listener on 43709] |ipc.Server|: Stopping IPC Server listener on 43709
2017-05-31 16:06:14,827 [INFO] [IPC Server Responder] |ipc.Server|: Stopping IPC Server Responder
2017-05-31 16:06:14,830 [INFO] [Thread-2] |app.DAGAppMaster|: DAGAppMasterShutdownHook invoked
2017-05-31 16:06:14,830 [INFO] [Thread-2] |app.DAGAppMaster|: The shutdown handler is still running, waiting for it to complete
2017-05-31 16:06:14,844 [INFO] [AMShutdownThread] |app.DAGAppMaster|: Completed deletion of tez scratch data dir, path=hdfs://avatarcluster/tmp/hive/hadoop/_tez_session_dir/46c45420-9bdf-40a5-83a5-c8d1d496abb8/.tez/application_1490688643958_53401
2017-05-31 16:06:14,844 [INFO] [AMShutdownThread] |app.DAGAppMaster|: Exiting DAGAppMaster..GoodBye!
2017-05-31 16:06:14,844 [INFO] [Thread-2] |app.DAGAppMaster|: The shutdown handler has completed
app05/08/09/10 are my test machines; each has 32 vcores and 48 GB of RAM.

Hadoop configuration

core-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://avatarcluster</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
    <property>
        <name>hadoop.proxyuser.hadoop.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.hadoop.groups</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/hadoop-data/</value>
    </property>
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>app02.hp.sp.tst.bmsre.com:2181</value>
    </property>
<property>
    <name>io.compression.codecs</name>
    <value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>
</configuration> 

hdfs-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>dfs.nameservices</name>
        <value>avatarcluster</value>
    </property>
    <property>
        <name>dfs.ha.namenodes.avatarcluster</name>
        <value>nn1,nn2</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.avatarcluster.nn1</name>
        <value>app05.hp.sp.tst.bmsre.com:8020</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.avatarcluster.nn2</name>
        <value>app10.hp.sp.tst.bmsre.com:8020</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.avatarcluster.nn1</name>
        <value>app05.hp.sp.tst.bmsre.com:50070</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.avatarcluster.nn2</name>
        <value>app10.hp.sp.tst.bmsre.com:50070</value>
    </property>
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://app05.hp.sp.tst.bmsre.com:8485;app10.hp.sp.tst.bmsre.com:8485;app08.hp.sp.tst.bmsre.com:8485/avatarcluster
        </value>
    </property>
    <property>
        <name>dfs.client.failover.proxy.provider.avatarcluster</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/hadoop/.ssh/id_rsa</value>
    </property>
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/home/hadoop/hadoop/journal-data</value>
    </property>
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/home/hadoop/hadoop/namenode</value>
    </property>
    <property>
        <name>dfs.blocksize</name>
        <value>134217728</value>
    </property>
    <property>
        <name>dfs.namenode.handler.count</name>
        <value>100</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
</configuration>

mapred-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>app05.hp.sp.tst.bmsre.com:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>app05.hp.sp.tst.bmsre.com:19888</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.intermediate-done-dir</name>
        <value>/mr-history/tmp</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.done-dir</name>
        <value>/mr-history/done</value>
    </property>
    <property>
        <name>mapred.output.compress</name>
        <value>true</value>
    </property>
    <property>
        <name>mapred.output.compression.codec</name>
        <value>org.apache.hadoop.io.compress.SnappyCodec</value>
    </property>
    <property>
        <name>mapred.compress.map.output</name>
        <value>true</value>
    </property>
    <property>
        <name>mapreduce.map.memory.mb</name>
        <value>3048</value>
    </property>
    <property>
        <name>mapreduce.reduce.memory.mb</name>
        <value>3048</value>
    </property>
    <property>
        <name>mapreduce.map.java.opts</name>
        <value>-Xmx2024m</value>
    </property>
    <property>
        <name>mapreduce.reduce.java.opts</name>
        <value>-Xmx2024m</value>
    </property>
</configuration>

tez-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
    <property>
        <name>tez.lib.uris</name>
        <value>${fs.defaultFS}/apps/tez-0.8.5.tar.gz</value>
    </property>
    <property>
        <name>tez.am.resource.memory.mb</name>
        <value>2048</value>
    </property>
    <property>
        <name>tez.task.resource.memory.mb</name>
        <value>2048</value>
    </property>
</configuration>

yarn-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>yarn.admin.acl</name>
        <value>*</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>app05.hp.sp.tst.bmsre.com:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>app05.hp.sp.tst.bmsre.com:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>app05.hp.sp.tst.bmsre.com:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>app05.hp.sp.tst.bmsre.com:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>app05.hp.sp.tst.bmsre.com:8088</value>
    </property>
    <property>
        <name>yarn.nodemanager.local-dirs</name>
        <value>/home/hadoop/hadoop/nodemanager-workdir</value>
    </property>
    <property>
        <name>yarn.nodemanager.log-dirs</name>
        <value>/home/hadoop/hadoop/nodemanager-logs</value>
    </property>
    <property>
        <name>yarn.nodemanager.log.retain-seconds</name>
        <value>3600</value>
    </property>
    <property>
        <name>yarn.nodemanager.remote-app-log-dir</name>
        <value>/home/hadoop/hadoop/nodemanager-remote-app-logs</value>
    </property>
    <property>
        <name>yarn.nodemanager.remote-app-log-dir-suffix</name>
        <value>logs</value>
    </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>4096</value>
    </property>
    <property>
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <value>1024</value>
    </property>
</configuration>


I'm not sure about your cluster configuration (nodes, memory, CPU); it would help if you could update the post with that information. In the meantime, my first guess is that your Tez configuration is wrong, so I suggest reducing the memory settings to smaller values (a few MB should be enough for such tiny data). Also, try running the same statement with the MapReduce engine to narrow down the problem:

hive.execution.engine=mr
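
A minimal sketch of both suggestions in a single Hive session (the 512 MB figures are illustrative assumptions, not tuned values, and the Tez AM setting only takes effect when a new Tez session is started):

set hive.execution.engine=mr;            -- fall back to MapReduce to isolate the problem
-- or, staying on Tez, try containers smaller than the 2048 MB configured in tez-site.xml:
set hive.execution.engine=tez;
set tez.am.resource.memory.mb=512;
set tez.task.resource.memory.mb=512;
insert into value_count values (5, 1, '2017-05-06');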

Hive can use the Apache Tez execution engine instead of the venerable MapReduce engine. I won't go into the many benefits of using Tez; instead, I want to make a simple suggestion: if it is not turned on by default in your environment, enable Tez at the beginning of your Hive query by setting:

set hive.execution.engine=tez;
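
To confirm which engine a session is actually using, issuing set with the key and no value prints the current setting:

set hive.execution.engine;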

In general, when you use

insert into ... values

it tends to create a small file every time the statement is executed. Since Hive does not create any constraints or indexes, the statements just keep adding small files. If ACID properties are enabled on the table, Hive also tries to compact periodically, merging all the small delta files into one large file; that process can sometimes take a while. Loading the data in bulk instead avoids creating one small file per statement:
LOAD DATA LOCAL INPATH '/FILE/PATH' INTO TABLE TABLE_NAME ;
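
If the rows genuinely have to arrive through SQL, batching several rows into one statement (supported in reasonably recent Hive releases) at least amortizes the per-statement job overhead and produces fewer small files. A sketch with made-up values for this table:

insert into value_count values
  (6, 1, '2017-05-07'),
  (7, 2, '2017-05-08'),
  (8, 3, '2017-05-09');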