How to find which Java/Scala thread has locked a file?
Tags: java, scala, apache-spark, hive

In short: how can I find out which Java/Scala thread has locked a file? I know that some class/thread inside the JVM has locked a concrete file (overlapping a region of the file), but I don't know how. Is it possible to find out which class/thread is doing this when I stop the application at a breakpoint? The code below throws the exception shown further down. How does Java/Scala lock this file? I know how to lock a file with java.nio.channels, but I did not find a suitable call there.
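For context, this is roughly how a file is locked through java.nio.channels (a minimal, self-contained sketch of my own; the temp-file name is made up and is not related to Spark's code):

```java
import java.io.File;
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.file.StandardOpenOption;

public class LockSketch {
    public static void main(String[] args) throws IOException {
        // Hypothetical lock file, analogous to the metastore's db.lck
        File file = File.createTempFile("example", ".lck");
        file.deleteOnExit();
        try (FileChannel channel = FileChannel.open(
                file.toPath(), StandardOpenOption.WRITE)) {
            // tryLock() returns null if another process already holds the lock
            FileLock lock = channel.tryLock();
            if (lock != null) {
                System.out.println("locked=" + lock.isValid());
                lock.release();
            }
        }
    }
}
```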
More about my problem: 1. When I run Spark with Hive on Windows it works fine, but every time Spark shuts down it cannot delete one temp directory (the other temp directories created before it are deleted correctly) and outputs the following exception:
2015-12-11 15:04:36 [Thread-13] INFO org.apache.spark.SparkContext - Successfully stopped SparkContext
2015-12-11 15:04:36 [Thread-13] INFO o.a.spark.util.ShutdownHookManager - Shutdown hook called
2015-12-11 15:04:36 [Thread-13] INFO o.a.spark.util.ShutdownHookManager - Deleting directory C:\Users\MyUser\AppData\Local\Temp\spark-9d564520-5370-4834-9946-ac5af3954032
2015-12-11 15:04:36 [Thread-13] INFO o.a.spark.util.ShutdownHookManager - Deleting directory C:\Users\MyUser\AppData\Local\Temp\spark-42b70530-30d2-41dc-aff5-8d01aba38041
2015-12-11 15:04:36 [Thread-13] ERROR o.a.spark.util.ShutdownHookManager - Exception while deleting Spark temp dir: C:\Users\MyUser\AppData\Local\Temp\spark-42b70530-30d2-41dc-aff5-8d01aba38041
java.io.IOException: Failed to delete: C:\Users\MyUser\AppData\Local\Temp\spark-42b70530-30d2-41dc-aff5-8d01aba38041
at org.apache.spark.util.Utils$.deleteRecursively(Utils.scala:884) [spark-core_2.11-1.5.0.jar:1.5.0]
at org.apache.spark.util.ShutdownHookManager$$anonfun$1$$anonfun$apply$mcV$sp$3.apply(ShutdownHookManager.scala:63) [spark-core_2.11-1.5.0.jar:1.5.0]
at org.apache.spark.util.ShutdownHookManager$$anonfun$1$$anonfun$apply$mcV$sp$3.apply(ShutdownHookManager.scala:60) [spark-core_2.11-1.5.0.jar:1.5.0]
at scala.collection.mutable.HashSet.foreach(HashSet.scala:78) [scala-library-2.11.6.jar:na]
at org.apache.spark.util.ShutdownHookManager$$anonfun$1.apply$mcV$sp(ShutdownHookManager.scala:60) [spark-core_2.11-1.5.0.jar:1.5.0]
at org.apache.spark.util.SparkShutdownHook.run(ShutdownHookManager.scala:264) [spark-core_2.11-1.5.0.jar:1.5.0]
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ShutdownHookManager.scala:234) [spark-core_2.11-1.5.0.jar:1.5.0]
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:234) [spark-core_2.11-1.5.0.jar:1.5.0]
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:234) [spark-core_2.11-1.5.0.jar:1.5.0]
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1699) [spark-core_2.11-1.5.0.jar:1.5.0]
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply$mcV$sp(ShutdownHookManager.scala:234) [spark-core_2.11-1.5.0.jar:1.5.0]
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:234) [spark-core_2.11-1.5.0.jar:1.5.0]
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:234) [spark-core_2.11-1.5.0.jar:1.5.0]
at scala.util.Try$.apply(Try.scala:191) [scala-library-2.11.6.jar:na]
at org.apache.spark.util.SparkShutdownHookManager.runAll(ShutdownHookManager.scala:234) [spark-core_2.11-1.5.0.jar:1.5.0]
at org.apache.spark.util.SparkShutdownHookManager$$anon$2.run(ShutdownHookManager.scala:216) [spark-core_2.11-1.5.0.jar:1.5.0]
at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54) [hadoop-common-2.4.1.jar:na]
I tried searching the internet, but only found open issues in Spark (one user tried a patch, but it does not work, if I understand the comments on that pull request correctly) and some unanswered questions.
The problem seems to be in the deleteRecursively() method of org.apache.spark.util.Utils (see the stack trace above). I set a breakpoint on this method and rewrote it in Java:
import java.io.File;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class Test {

    public static void deleteRecursively(File file) {
        if (file != null) {
            try {
                if (file.isDirectory()) {
                    for (File child : listFilesSafely(file)) {
                        deleteRecursively(child);
                    }
                    //ShutdownHookManager.removeShutdownDeleteDir(file)
                }
            } finally {
                if (!file.delete()) {
                    if (file.exists()) {
                        throw new RuntimeException("Failed to delete: " + file.getAbsolutePath());
                    }
                }
            }
        }
    }

    private static List<File> listFilesSafely(File file) {
        if (file.exists()) {
            File[] files = file.listFiles();
            if (files == null) {
                throw new RuntimeException("Failed to list files for dir: " + file);
            }
            return Arrays.asList(files);
        } else {
            return Collections.emptyList();
        }
    }

    public static void main(String[] arg) {
        deleteRecursively(new File("C:\\Users\\MyUser\\AppData\\Local\\Temp\\spark-9ba0bb0c-1e20-455d-bc1f-86c696661ba3"));
    }
}
Stopping at a breakpoint in this method, I found that a thread in the JVM had locked the file "C:\Users\MyUser\AppData\Local\Temp\spark-9ba0bb0c-1e20-455d-bc1f-86c696661ba3\metastore\db.lck", and Windows also showed that Java had locked that file. FileChannel also showed that the file was locked inside the JVM.
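This is why file.delete() fails: the behavior of deleting a file that the process still holds open is OS-dependent. A small sketch of my own (not Spark's code) that shows it:

```java
import java.io.File;
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.StandardOpenOption;

public class DeleteWhileOpen {
    public static void main(String[] args) throws IOException {
        File file = File.createTempFile("held", ".lck");
        try (FileChannel channel = FileChannel.open(
                file.toPath(), StandardOpenOption.WRITE)) {
            // POSIX systems unlink the name even while a handle is open;
            // Windows refuses, and File.delete() returns false.
            System.out.println("deleted=" + file.delete());
        }
        file.delete(); // clean up in case the first attempt failed (Windows)
    }
}
```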
Now, all I have to do is run:
File newFile = new File("newFile.lock");
newFile.createNewFile();
FileLock fileLock = FileChannel.open(Paths.get(newFile.getAbsolutePath()), StandardOpenOption.APPEND).tryLock();
and in Thread.threadLocals I can see an instance of the sun.nio.fs.NativeBuffer class whose owner field is "../newFile.lock".
So you can try the code below, which walks all threads and prints every value found in their threadLocals; you then need to find which threads hold a NativeBuffer, Spark/Hive objects, and so on (after inspecting that thread's threadLocals in Eclipse or IDEA debug mode):
Here is what I found out about my own problem, in addition to the other answers (thanks a lot); maybe it will help someone in the same situation:
When Spark tried to delete db.lck, these objects were holding the file:
- sun.nio.fs.NativeBuffer
- sun.nio.ch.Util$BufferCache
- Hive objects (org.apache.hadoop.hive.ql.metadata.Hive, org.apache.hadoop.hive.metastore.ObjectStore, org.apache.hadoop.hive.ql.session.SessionState)

This means that Spark simply does not shut Hive down properly before trying to delete Hive's files. Fortunately, the problem does not exist on Linux (probably because Linux allows deleting files that are still locked).

Comments: "This is all good stuff, but the problem is not that there is a locked object at the Java level; it is that there is a locked file at the OS level." "@NeilMasson It could still be Java that is holding it, and then this helps. Of course, it is much easier to just look at the handles with a profiler." "Thanks! I tried your code, but I am not sure it helps me."
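Regarding the Java-level vs OS-level debate in the comments: one way to probe whether a file is OS-locked is simply to tryLock it yourself; a null result or an OverlappingFileLockException means someone (another process, or another channel in this JVM) holds it. A sketch of my own, assuming the file is writable:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.channels.OverlappingFileLockException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ProbeLock {
    // Returns true if we could lock the file ourselves, i.e. nobody
    // else holds an OS-level lock on it right now.
    static boolean canLock(Path path) throws IOException {
        try (FileChannel ch = FileChannel.open(path, StandardOpenOption.WRITE)) {
            FileLock lock = ch.tryLock();
            if (lock == null) {
                return false; // held by another process
            }
            lock.release();
            return true;
        } catch (OverlappingFileLockException e) {
            return false; // held by another thread/channel in this JVM
        }
    }

    public static void main(String[] args) throws IOException {
        Path p = Files.createTempFile("probe", ".lck");
        System.out.println("canLock=" + canLock(p));
        Files.delete(p);
    }
}
```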
File newFile = new File("newFile.lock");
newFile.createNewFile();
FileLock fileLock = FileChannel.open(Paths.get(newFile.getAbsolutePath()), StandardOpenOption.APPEND).tryLock();
// Dumps every threadLocals value of every live thread.
// Needs java.lang.reflect.Array and java.lang.reflect.Field.
// On JDK 16+ this reflection requires --add-opens java.base/java.lang=ALL-UNNAMED.
private static String getThreadsLockFile() {
    Set<Thread> threads = Thread.getAllStackTraces().keySet();
    StringBuilder builder = new StringBuilder();
    for (Thread thread : threads) {
        builder.append(getThreadsLockFile(thread));
    }
    return builder.toString();
}

private static String getThreadsLockFile(Thread thread) {
    StringBuffer stringBuffer = new StringBuffer();
    try {
        // Use Thread.class: threadLocals is declared in Thread itself,
        // and thread.getClass() may be a subclass without that field.
        Field field = Thread.class.getDeclaredField("threadLocals");
        field.setAccessible(true);
        Object map = field.get(thread);
        Field table = Class.forName("java.lang.ThreadLocal$ThreadLocalMap").getDeclaredField("table");
        table.setAccessible(true);
        Object tbl = table.get(map);
        int length = Array.getLength(tbl);
        for (int i = 0; i < length; i++) {
            try {
                Object entry = Array.get(tbl, i);
                if (entry != null) {
                    Field valueField = Class.forName("java.lang.ThreadLocal$ThreadLocalMap$Entry").getDeclaredField("value");
                    valueField.setAccessible(true);
                    Object value = valueField.get(entry);
                    if (value != null) {
                        stringBuffer.append(thread.getName()).append(" : ").append(value.getClass())
                                .append(" ").append(value).append("\n");
                    }
                }
            } catch (Exception exp) {
                // skip, do nothing
            }
        }
    } catch (Exception exp) {
        // skip, do nothing
    }
    return stringBuffer.toString();
}
// Returns the names of the threads whose threadLocals hold a
// sun.nio.fs.NativeBuffer whose owner string contains fileName.
private static String getThreadsLockFile(String fileName) {
    Set<Thread> threads = Thread.getAllStackTraces().keySet();
    StringBuilder builder = new StringBuilder();
    for (Thread thread : threads) {
        builder.append(getThreadsLockFile(thread, fileName));
    }
    return builder.toString();
}

private static String getThreadsLockFile(Thread thread, String fileName) {
    StringBuffer stringBuffer = new StringBuffer();
    try {
        Field field = Thread.class.getDeclaredField("threadLocals");
        field.setAccessible(true);
        Object map = field.get(thread);
        Field table = Class.forName("java.lang.ThreadLocal$ThreadLocalMap").getDeclaredField("table");
        table.setAccessible(true);
        Object tbl = table.get(map);
        int length = Array.getLength(tbl);
        for (int i = 0; i < length; i++) {
            try {
                Object entry = Array.get(tbl, i);
                if (entry != null) {
                    Field valueField = Class.forName("java.lang.ThreadLocal$ThreadLocalMap$Entry").getDeclaredField("value");
                    valueField.setAccessible(true);
                    Object value = valueField.get(entry);
                    if (value != null) {
                        // sun.nio.fs caches NativeBuffer[] arrays in a ThreadLocal;
                        // Array.getLength throws for non-array values, which we skip.
                        int length1 = Array.getLength(value);
                        for (int j = 0; j < length1; j++) {
                            try {
                                Object entry1 = Array.get(value, j);
                                Field ownerField = Class.forName("sun.nio.fs.NativeBuffer").getDeclaredField("owner");
                                ownerField.setAccessible(true);
                                String owner = ownerField.get(entry1).toString();
                                if (owner.contains(fileName)) {
                                    stringBuffer.append(thread.getName());
                                }
                            } catch (Exception exp) {
                                // skip, do nothing
                            }
                        }
                    }
                }
            } catch (Exception exp) {
                // skip, do nothing
            }
        }
    } catch (Exception exp) {
        // skip, do nothing
    }
    return stringBuffer.toString();
}