Java affine transform truncates my image: what am I doing wrong?
I have a black-and-white PNG file, 2156x1728 in size, that I want to rotate 90 degrees using an affine transform. The resulting image has the wrong proportions. Here is some sample code (assume I have already successfully loaded the PNG into a BufferedImage). The output is:

Input width: 2156
Input height: 1728
Result width: 1942
Result height: 1942

Why does the rotation return such completely unrelated dimensions?
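The question's code is not shown, but the reported numbers can be reproduced under one assumption: the rotation was anchored on the source image's own center (w/2, h/2) and the destination size was taken from the transformed bounds. Rotating the 2156x1728 rectangle 90 degrees about (1078, 864) gives a bounding box whose far corner sits at (1942, 1942), which is (w + h) / 2 on both axes. A minimal sketch of that arithmetic (my own illustration, not the asker's code):

```java
import java.awt.Rectangle;
import java.awt.geom.AffineTransform;
import java.awt.geom.Rectangle2D;

public class WhyNineteenFortyTwo {
    public static void main(String[] args) {
        int w = 2156, h = 1728;

        // Assumed culprit: rotating about the source center instead of a
        // center chosen for the destination's dimensions.
        AffineTransform at = new AffineTransform();
        at.setToQuadrantRotation(1, w / 2, h / 2);

        // Bounding box of the rotated source rectangle.
        Rectangle2D b = at.createTransformedShape(new Rectangle(0, 0, w, h))
                .getBounds2D();
        System.out.println(b.getMaxX() + " x " + b.getMaxY()); // 1942.0 x 1942.0
    }
}
```

The far corner at (1942, 1942) matches the reported 1942x1942 result.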
I'm no expert in this area, but why not create a BufferedImage of the correct size? Also note that your rotation center is incorrect. You need to rotate about the center [w/2, w/2] or [h/2, h/2] (w being the width, h the height), depending on whether you rotate into quadrant 1 or 3 and on the relative height and width of the image. For example:
import java.awt.geom.AffineTransform;
import java.awt.image.AffineTransformOp;
import java.awt.image.BufferedImage;
import java.io.IOException;
import java.net.MalformedURLException;
import java.net.URL;

import javax.imageio.ImageIO;
import javax.swing.ImageIcon;
import javax.swing.JLabel;
import javax.swing.JOptionPane;

public class RotateImage {

    public static final String IMAGE_PATH = "http://duke.kenai.com/"
            + "models/Duke3DprogressionSmall.jpg";

    public static void main(String[] args) {
        try {
            URL imageUrl = new URL(IMAGE_PATH);
            BufferedImage img0 = ImageIO.read(imageUrl);
            ImageIcon icon0 = new ImageIcon(img0);
            int numquadrants = 1;
            BufferedImage img1 = transform(img0, numquadrants);
            ImageIcon icon1 = new ImageIcon(img1);
            JOptionPane.showMessageDialog(null, new JLabel(icon0));
            JOptionPane.showMessageDialog(null, new JLabel(icon1));
        } catch (MalformedURLException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public static BufferedImage transform(BufferedImage image, int numquadrants) {
        int w0 = image.getWidth();
        int h0 = image.getHeight();
        int w1 = w0;
        int h1 = h0;
        int centerX = w0 / 2;
        int centerY = h0 / 2;
        if (numquadrants % 2 == 1) {
            w1 = h0;
            h1 = w0;
        }
        if (numquadrants % 4 == 1) {
            if (w0 > h0) {
                centerX = h0 / 2;
                centerY = h0 / 2;
            } else if (h0 > w0) {
                centerX = w0 / 2;
                centerY = w0 / 2;
            }
            // if h0 == w0, then use the default
        } else if (numquadrants % 4 == 3) {
            if (w0 > h0) {
                centerX = w0 / 2;
                centerY = w0 / 2;
            } else if (h0 > w0) {
                centerX = h0 / 2;
                centerY = h0 / 2;
            }
            // if h0 == w0, then use the default
        }
        AffineTransform affineTransform = new AffineTransform();
        affineTransform.setToQuadrantRotation(numquadrants, centerX, centerY);
        AffineTransformOp opRotated = new AffineTransformOp(affineTransform,
                AffineTransformOp.TYPE_BILINEAR);
        BufferedImage transformedImage = new BufferedImage(w1, h1,
                image.getType());
        transformedImage = opRotated.filter(image, transformedImage);
        return transformedImage;
    }
}
Edit 1: You asked:

Can you explain why it must be [w/2, w/2] or [h/2, h/2]?

To explain this, it is best to visualize the rectangle and manipulate it physically: cut out a rectangular piece of paper and place it on a sheet so that its top-left corner sits on the top-left corner of the sheet; that is your on-screen image. Now work out where you must rotate the rectangle by 1 or 3 quadrants so that its new top-left corner again covers the top-left corner of the sheet, and you will see why [w/2, w/2] or [h/2, h/2] is needed.

The solution above has problems depending on the image's width and height. The code below works regardless of whether w > h or h > w:
public static BufferedImage rotateImage(BufferedImage image, int quadrants) {
    int w0 = image.getWidth();
    int h0 = image.getHeight();
    int w1 = w0;
    int h1 = h0;
    int centerX = w0 / 2;
    int centerY = h0 / 2;
    if (quadrants % 2 == 1) {
        w1 = h0;
        h1 = w0;
    }
    if (quadrants % 4 == 1) {
        centerX = h0 / 2;
        centerY = h0 / 2;
    } else if (quadrants % 4 == 3) {
        centerX = w0 / 2;
        centerY = w0 / 2;
    }
    AffineTransform affineTransform = new AffineTransform();
    affineTransform.setToQuadrantRotation(quadrants, centerX, centerY);
    AffineTransformOp opRotated = new AffineTransformOp(affineTransform,
            AffineTransformOp.TYPE_BILINEAR);
    BufferedImage transformedImage = new BufferedImage(w1, h1,
            image.getType());
    transformedImage = opRotated.filter(image, transformedImage);
    return transformedImage;
}
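The choice of [h/2, h/2] for a quadrant-1 rotation can be verified geometrically: transforming the source rectangle's bounds should land exactly on the h x w destination. A small check of my own, using the question's 2156x1728 dimensions:

```java
import java.awt.Rectangle;
import java.awt.geom.AffineTransform;
import java.awt.geom.Rectangle2D;

public class CenterCheck {
    public static void main(String[] args) {
        int w0 = 2156, h0 = 1728;

        // Quadrant-1 rotation about (h0/2, h0/2), as in rotateImage above.
        AffineTransform at = new AffineTransform();
        at.setToQuadrantRotation(1, h0 / 2.0, h0 / 2.0);

        // The rotated source lands exactly on [0, h0] x [0, w0],
        // i.e. the destination created with new BufferedImage(h0, w0, ...).
        Rectangle2D b = at.createTransformedShape(new Rectangle(0, 0, w0, h0))
                .getBounds2D();
        System.out.println(b); // x=0, y=0, width=1728, height=2156
    }
}
```

The top-left source corner maps to the destination's top-right corner, as the paper-rotation exercise above suggests.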
The answer is great and helped me a lot, but it isn't perfect. If the image is rectangular, one side of the resulting rotated image may contain some extra black pixels.
I tried it with a photo of Marty Feldman; the original and the result can be viewed at the following links:
It is hard to see against a black background, but in any image-editing software you can easily spot the small black border on the right and bottom of the result image. This may not be a problem for some people, but if it is for you, the fixed code is below (I kept the original code as comments for easy comparison):
Warning: opinion ahead. I'm not sure why this happens, but I have a guess; please edit this if you can explain it better. I believe the cause of this "glitch" is odd dimensions: when computing the dimensions of the new BufferedImage, a height of 273, for example, produces a center of 136 when the correct value would be 136.5. This can make the rotation happen slightly off-center. Passing null to filter as the destination image, however, "creates a BufferedImage with the source ColorModel", and that seems to work best.
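The half-pixel shift can be seen directly from the transform itself. In this sketch (my own illustration, not from the thread), an odd height of 273 is rotated once about the truncated integer center and once about the exact fractional center; the same source corner lands one pixel apart, which is exactly the kind of one-pixel black border described above. Note that setToQuadrantRotation has an overload taking double anchors, so passing h0 / 2.0 instead of h0 / 2 avoids the truncation:

```java
import java.awt.geom.AffineTransform;
import java.awt.geom.Point2D;

public class OddCenterDemo {
    public static void main(String[] args) {
        int h0 = 273; // an odd dimension, as in the comment above

        // Integer division truncates the true center 136.5 down to 136.
        AffineTransform truncated = new AffineTransform();
        truncated.setToQuadrantRotation(1, h0 / 2, h0 / 2);

        // The double overload keeps the exact half-pixel center.
        AffineTransform exact = new AffineTransform();
        exact.setToQuadrantRotation(1, h0 / 2.0, h0 / 2.0);

        // The same source corner lands one pixel apart under the two transforms.
        Point2D a = truncated.transform(new Point2D.Double(0, 0), null);
        Point2D b = exact.transform(new Point2D.Double(0, 0), null);
        System.out.println(a.getX()); // 272.0
        System.out.println(b.getX()); // 273.0
    }
}
```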
OK, I'll give it a try. Can you explain why it must be [w/2, w/2] or [h/2, h/2]? I found your solution works, but it needs a small correction: the code that defines the centerX and centerY variables needs its greater-than/less-than operators switched.
public BufferedImage rotateImage(BufferedImage image, int quadrants) {
    int w0 = image.getWidth();
    int h0 = image.getHeight();
    /* These are not necessary anymore
     * int w1 = w0;
     * int h1 = h0;
     */
    int centerX = w0 / 2;
    int centerY = h0 / 2;
    /* This is not necessary anymore
     * if (quadrants % 2 == 1) {
     *     w1 = h0;
     *     h1 = w0;
     * }
     */
    //System.out.println("Original dimensions: " + w0 + ", " + h0);
    //System.out.println("Rotated dimensions: " + w1 + ", " + h1);
    if (quadrants % 4 == 1) {
        centerX = h0 / 2;
        centerY = h0 / 2;
    } else if (quadrants % 4 == 3) {
        centerX = w0 / 2;
        centerY = w0 / 2;
    }
    //System.out.println("CenterX: " + centerX);
    //System.out.println("CenterY: " + centerY);
    AffineTransform affineTransform = new AffineTransform();
    affineTransform.setToQuadrantRotation(quadrants, centerX, centerY);
    AffineTransformOp opRotated = new AffineTransformOp(affineTransform,
            AffineTransformOp.TYPE_BILINEAR);
    /* Old code for comparison
     * BufferedImage transformedImage = new BufferedImage(w1, h1, image.getType());
     * transformedImage = opRotated.filter(image, transformedImage);
     */
    BufferedImage transformedImage = opRotated.filter(image, null);
    return transformedImage;
}
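As an aside not taken from the answers above: the center bookkeeping can be skipped entirely by composing a translation with a rotation about the origin. Shift right by the rotated image's width, then rotate; the source then lands exactly on the h x w destination. A minimal sketch for the 90-degree case:

```java
import java.awt.geom.AffineTransform;
import java.awt.image.AffineTransformOp;
import java.awt.image.BufferedImage;

public class RotateByTranslate {
    /** Rotates 90 degrees clockwise; no center arithmetic needed. */
    public static BufferedImage rotate90(BufferedImage src) {
        int w = src.getWidth();
        int h = src.getHeight();
        AffineTransform at = new AffineTransform();
        at.translate(h, 0);   // shift right by the rotated image's width
        at.quadrantRotate(1); // then rotate about the origin
        AffineTransformOp op = new AffineTransformOp(at,
                AffineTransformOp.TYPE_BILINEAR);
        // Destination is h x w; the transformed source fills it exactly.
        return op.filter(src, new BufferedImage(h, w, src.getType()));
    }
}
```

Because the transform maps (x, y) to (h - y, x), every source pixel falls inside the h x w destination with no off-center shift, for even and odd dimensions alike.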