Apache Kafka: Kafka unable to delete old log segments on Windows

I am running into a problem with Kafka on Windows: it tries to delete old log segments but cannot, because the files are reported as in use by another process. The cause is that Kafka itself holds a handle to the file and then tries to delete the file while it is still open. The error is included below for reference.
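
The Windows-specific part can be reproduced without Kafka. As a minimal sketch (it reuses the segment file name from the trace purely as an example), renaming a file while a handle that was opened without delete-sharing still refers to it fails with the same FileSystemException shown in the trace below:

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardCopyOption;

    public class RenameWhileOpenDemo {
        public static void main(String[] args) throws IOException {
            // Stand-in for a Kafka segment file (name borrowed from the trace below).
            Path segment = Paths.get("00000000000006711351.log");
            Files.write(segment, new byte[16]);

            // java.io streams on Windows open the file without delete-sharing,
            // so this handle blocks any rename or delete until it is closed.
            try (FileInputStream in = new FileInputStream(segment.toFile())) {
                // On Windows this throws java.nio.file.FileSystemException:
                // "The process cannot access the file because it is being used by another process."
                // On Linux the same rename succeeds even though the file is still open.
                Files.move(segment, Paths.get("00000000000006711351.log.deleted"),
                        StandardCopyOption.ATOMIC_MOVE);
            }
        }
    }

This is the same kind of sharing conflict the broker hits when it renames a segment to *.deleted while a handle to that segment is still open.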

I have found two JIRA issues that have already been filed for this problem. The first is logged under version 0.8.1 and the second under version 0.10.1.

I have personally tried versions 0.10.1 and 0.10.2. Neither of them fixes the error.

My question is: does anyone know of a workaround that resolves this issue, or whether the Kafka team has a fix planned for an upcoming release?

Thanks

kafka.common.KafkaStorageException: Failed to change the log file suffix from  to .deleted for log segment 6711351
    at kafka.log.LogSegment.kafkaStorageException$1(LogSegment.scala:340)
    at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:342)
    at kafka.log.Log.kafka$log$Log$$asyncDeleteSegment(Log.scala:981)
    at kafka.log.Log.kafka$log$Log$$deleteSegment(Log.scala:971)
    at kafka.log.Log$$anonfun$deleteOldSegments$1.apply(Log.scala:673)
    at kafka.log.Log$$anonfun$deleteOldSegments$1.apply(Log.scala:673)
    at scala.collection.immutable.List.foreach(List.scala:381)
    at kafka.log.Log.deleteOldSegments(Log.scala:673)
    at kafka.log.Log.deleteRetentionSizeBreachedSegments(Log.scala:717)
    at kafka.log.Log.deleteOldSegments(Log.scala:697)
    at kafka.log.LogManager$$anonfun$cleanupLogs$3.apply(LogManager.scala:474)
    at kafka.log.LogManager$$anonfun$cleanupLogs$3.apply(LogManager.scala:472)
    at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
    at scala.collection.Iterator$class.foreach(Iterator.scala:893)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
    at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
    at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
    at kafka.log.LogManager.cleanupLogs(LogManager.scala:472)
    at kafka.log.LogManager$$anonfun$startup$1.apply$mcV$sp(LogManager.scala:200)
    at kafka.utils.KafkaScheduler$$anonfun$1.apply$mcV$sp(KafkaScheduler.scala:110)
    at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:57)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.nio.file.FileSystemException: c:\kafka-logs\kafka-logs\metric-values-0\00000000000006711351.log -> c:\kafka-logs\kafka-logs\metric-values-0\00000000000006711351.log.deleted: The process cannot access the file because it is being used by another process.

    at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
    at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
    at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:387)
    at sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
    at java.nio.file.Files.move(Files.java:1395)
    at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:711)
    at org.apache.kafka.common.record.FileRecords.renameTo(FileRecords.java:210)
    ... 28 more
    Suppressed: java.nio.file.FileSystemException: c:\kafka-logs\kafka-logs\metric-values-0\00000000000006711351.log -> c:\kafka-logs\kafka-logs\metric-values-0\00000000000006711351.log.deleted: The process cannot access the file because it is being used by another process.

            at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
            at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
            at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:301)
            at sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
            at java.nio.file.Files.move(Files.java:1395)
            at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:708)
            ... 29 more

I ran into a similar problem running Kafka locally: the Kafka server seemed to stop running whenever it failed to delete a log file. To keep that from happening, I had to increase the log retention so that segments are not deleted automatically:

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=500

Setting the retention to xxx hours avoids the problem when running locally, but on a Linux-based system I don't think this should happen in production.
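
Note that the stack trace above goes through Log.deleteRetentionSizeBreachedSegments, so size-based retention can trigger the same delete path even when the time-based retention is long. A possible server.properties sketch for deferring both cleanup triggers on a local Windows setup (the values are only suggestions):

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=500
# A size-based retention policy for logs; -1 disables it (this is the path
# taken in the stack trace above)
log.retention.bytes=-1
# The interval at which log segments are checked to see whether they can be deleted
log.retention.check.interval.ms=300000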


If you need the log files gone, delete them manually from the log directory (log.dirs; c:\kafka-logs\kafka-logs in the trace above) and then restart Kafka.

I am running into a similar problem. Did you ever find a solution?