Java FileChannel, ByteBuffer, and file hashing
I've built a file-hashing method in Java that takes an input String representation of filepath + filename and then calculates the hash of that file. The hash can be any of the natively supported Java hash algorithms, such as MD2 through SHA-512.
I'm trying to eke out every last drop of performance, since this method is an integral part of a project I'm working on. I was advised to try using FileChannel instead of a regular FileInputStream.
My original method:
/**
* Gets Hash of file.
*
* @param file String path + filename of file to get hash.
* @param hashAlgo Hash algorithm to use. <br/>
* Supported algorithms are: <br/>
* MD2, MD5 <br/>
* SHA-1 <br/>
* SHA-256, SHA-384, SHA-512
* @return String value of hash. (Variable length dependent on hash algorithm used)
* @throws IOException If file is invalid.
* @throws HashTypeException If no supported or valid hash algorithm was found.
*/
public String getHash(String file, String hashAlgo) throws IOException, HashTypeException {
    StringBuffer hexString = null;
    try {
        MessageDigest md = MessageDigest.getInstance(validateHashType(hashAlgo));
        FileInputStream fis = new FileInputStream(file);
        byte[] dataBytes = new byte[1024];
        int nread = 0;
        while ((nread = fis.read(dataBytes)) != -1) {
            md.update(dataBytes, 0, nread);
        }
        fis.close();
        byte[] mdbytes = md.digest();
        hexString = new StringBuffer();
        for (int i = 0; i < mdbytes.length; i++) {
            hexString.append(Integer.toHexString((0xFF & mdbytes[i])));
        }
        return hexString.toString();
    } catch (NoSuchAlgorithmException | HashTypeException e) {
        throw new HashTypeException("Unsupported Hash Algorithm.", e);
    }
}
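The updated FileChannel-based version benchmarked below isn't reproduced in this post. A minimal sketch of what such a method might look like, assuming a heap-backed ByteBuffer (so `array()` is accessible) and `String.format("%02x", …)` for zero-padded hex output; the class and buffer names here are hypothetical:

```java
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.MessageDigest;

public class ChannelHashSketch {

    // Hypothetical FileChannel-based variant of getHash().
    public static String getHash(String file, String hashAlgo) throws Exception {
        MessageDigest md = MessageDigest.getInstance(hashAlgo);
        try (FileChannel fc = FileChannel.open(Paths.get(file))) {
            ByteBuffer bbf = ByteBuffer.allocate(8192); // heap buffer: array() works
            int bytes;
            while ((bytes = fc.read(bbf)) != -1) {
                md.update(bbf.array(), 0, bytes);
                bbf.clear(); // reset position/limit before the next read
            }
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : md.digest()) {
            hex.append(String.format("%02x", b)); // zero-padded, unlike toHexString
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("hash", ".bin");
        Files.write(tmp, "hello".getBytes(StandardCharsets.UTF_8));
        System.out.println(getHash(tmp.toString(), "MD5")); // prints 5d41402abc4b2a76b9719d911017c592
        Files.delete(tmp);
    }
}
```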
So, I benchmarked taking the MD5 of a 2.92GB file using both my original example and my latest updated example. Of course any benchmark is relative, since OS and disk caching and other "magic" will skew repeated reads of the same file... but here are some numbers anyway. I loaded each method up and fired it off 5 times after compiling it fresh. The benchmark was taken from the last (5th) run, as this would be the "hottest" run for that algorithm, and any such "magic" (in my theory anyway).
That's a 25.03% reduction in time taken to hash the same 2.92GB file. Very nice. Three suggestions:
1) Clear the buffer after each read:
while (fc.read(bbf) != -1) {
    md.update(bbf.array(), 0, bytes);
    bbf.clear();
}
2) Don't close both fc and fis; it's redundant, and closing fis is enough. The FileInputStream.close API docs state:
If this stream has an associated channel then the channel is closed as well.
3) If you want the performance boost from using FileChannel, use a direct buffer:
ByteBuffer.allocateDirect(1024);
Here is an example of hashing a file using NIO:
- Path
- FileChannel
- MappedByteBuffer
public static final byte[] getFileHash(final File src, final String hashAlgo) throws IOException, NoSuchAlgorithmException {
    final int BUFFER = 32 * 1024;
    final Path file = src.toPath();
    try (final FileChannel fc = FileChannel.open(file)) {
        final long size = fc.size();
        final MessageDigest hash = MessageDigest.getInstance(hashAlgo);
        long position = 0;
        while (position < size) {
            // map the next chunk at the current position (not always at offset 0)
            final MappedByteBuffer data = fc.map(FileChannel.MapMode.READ_ONLY, position, Math.min(size - position, BUFFER));
            if (!data.isLoaded()) data.load();
            hash.update(data);
            position += data.limit();
        }
        return hash.digest();
    }
}
public static final byte[] getCachedFileHash(final File src, final String hashAlgo) throws NoSuchAlgorithmException, FileNotFoundException, IOException {
    final Path path = src.toPath();
    if (!Files.isReadable(path)) return null;
    final UserDefinedFileAttributeView view = Files.getFileAttributeView(path, UserDefinedFileAttributeView.class);
    final String name = "user.hash." + hashAlgo;
    final ByteBuffer bb = ByteBuffer.allocate(64);
    try {
        final int read = view.read(name, bb);
        final byte[] cached = new byte[read];
        ((ByteBuffer) bb.flip()).get(cached); // copy only the bytes actually stored
        return cached;
    } catch (final NoSuchFileException t) {
        // Not yet calculated
    } catch (final Throwable t) {
        t.printStackTrace();
    }
    System.out.println("Hash not found, calculating");
    final byte[] hash = getFileHash(src, hashAlgo);
    view.write(name, ByteBuffer.wrap(hash));
    return hash;
}
Another possible improvement would be to allocate the temporary buffer only once; see the reusable direct-buffer read loop at the end of this post.
Addendum
Note: there is a bug in the string-building code. Integer.toHexString drops leading zeros, so any byte below 0x10 is printed with a single digit. It's easy to fix, e.g.:
hexString.append(String.format("%02x", mdbytes[i]));
Also, as an experiment, I rewrote the code to use a mapped ByteBuffer. It runs about 30% faster (6-7 ms vs. 9-11 ms, FWIW). I expect you could get even more out of it if you wrote hashing code that operates directly on the byte buffer.
I tried to account for JVM initialization and file-system caching by hashing a different file with each algorithm before starting the timer. The first run of the code is about 25 times slower than a normal run. This appears to be due to JVM initialization, because all runs in the timing loop are roughly the same length; they don't seem to benefit from caching. I tested with the MD5 algorithm. Also, during the timed portion, only one algorithm runs for the duration of the test program.
The code in the loop is shorter, so it's probably easier to understand. I'm not 100% sure what kind of pressure heavy memory mapping puts on the JVM, so that's something you might need to research and consider if you want to run this solution under load.
public static byte[] hash(File file, String hashAlgo) throws IOException {
    FileInputStream inputStream = null;
    try {
        MessageDigest md = MessageDigest.getInstance(hashAlgo);
        inputStream = new FileInputStream(file);
        FileChannel channel = inputStream.getChannel();
        long length = file.length();
        if (length > Integer.MAX_VALUE) {
            // you could make this work with some care,
            // but this code does not bother.
            throw new IOException("File " + file.getAbsolutePath() + " is too large.");
        }
        ByteBuffer buffer = channel.map(MapMode.READ_ONLY, 0, length);
        int bufsize = 1024 * 8;
        byte[] temp = new byte[bufsize];
        int bytesRead = 0;
        while (bytesRead < length) {
            int numBytes = (int) length - bytesRead >= bufsize ? bufsize : (int) length - bytesRead;
            buffer.get(temp, 0, numBytes);
            md.update(temp, 0, numBytes);
            bytesRead += numBytes;
        }
        byte[] mdbytes = md.digest();
        return mdbytes;
    } catch (NoSuchAlgorithmException e) {
        throw new IllegalArgumentException("Unsupported Hash Algorithm.", e);
    } finally {
        if (inputStream != null) {
            inputStream.close();
        }
    }
}
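The copy into temp above can be skipped entirely: MessageDigest.update(ByteBuffer) consumes a buffer directly, so the digest can read straight from the mapped region. A rough sketch of that variant (the class and method names are hypothetical; it keeps the same Integer.MAX_VALUE single-mapping limitation):

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class DirectBufferHash {

    // Hypothetical variant: feed the mapped region straight to the digest,
    // with no intermediate byte[] copy.
    public static byte[] hashDirect(File file, String hashAlgo)
            throws IOException, NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance(hashAlgo);
        try (RandomAccessFile raf = new RandomAccessFile(file, "r");
             FileChannel channel = raf.getChannel()) {
            long length = channel.size();
            if (length > Integer.MAX_VALUE) {
                throw new IOException("File too large to map in one piece.");
            }
            MappedByteBuffer buffer = channel.map(FileChannel.MapMode.READ_ONLY, 0, length);
            md.update(buffer); // digest reads directly from the mapping
        }
        return md.digest();
    }

    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("dbh", ".bin");
        Files.write(f.toPath(), "hello".getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : hashDirect(f, "MD5")) {
            hex.append(String.format("%02x", b));
        }
        System.out.println(hex); // prints 5d41402abc4b2a76b9719d911017c592
        f.delete();
    }
}
```

Note that Java provides no way to explicitly unmap a MappedByteBuffer, so on Windows the file stays locked until the buffer is garbage collected.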
int bufsize = 8192;
ByteBuffer buffer = ByteBuffer.allocateDirect(bufsize);
byte[] temp = new byte[bufsize];
int b = channel.read(buffer);
while (b > 0) {
    buffer.flip();
    buffer.get(temp, 0, b);
    md.update(temp, 0, b);
    buffer.clear();
    b = channel.read(buffer);
}