In my application I want the user to be able to select part of the content of an image contained inside an ImageView.
To select the content I subclassed the ImageView class, making it implement OnTouchListener so that a rectangle with borders chosen by the user can be drawn over it.
Here is an example of the result of the drawing (to get an idea of how it works, think of clicking on your desktop with the mouse and dragging it):
Now I need to determine which pixels of the Bitmap image correspond to the selected part. It is fairly easy to determine which points of the ImageView belong to the rectangle, but I don't know how to get the corresponding pixels, since the ImageView has a different aspect ratio than the original image.
I followed the approach described especially here, but also here, but I am not fully satisfied, because in my opinion the correspondence made is 1 to 1 between pixels and points on the ImageView, and it does not give me all the pixels of the original image corresponding to the selected area.
Calling hoveredRect the rectangle on the ImageView, the points inside it are:
class Point {
    float x, y;

    @Override
    public String toString() {
        return x + ", " + y;
    }
}

// Collect every integer point of the ImageView lying inside the rectangle
Vector<Point> pointsInRect = new Vector<Point>();
for (int x = hoveredRect.left; x <= hoveredRect.right; x++) {
    for (int y = hoveredRect.top; y <= hoveredRect.bottom; y++) {
        Point pointInRect = new Point();
        pointInRect.x = x;
        pointInRect.y = y;
        pointsInRect.add(pointInRect);
    }
}
How can I obtain a Vector<Pixels> pixelsInImage containing the corresponding pixels of the Bitmap image?
ADDED EXPLANATIONS
I'll explain the context of my issue a little better:
I need to do some image processing on the selected part, and I want to be sure that all the pixels in the rectangle get processed.
The image processing will be done on a server, which needs to know exactly which pixels to process. The server works with the image at its real dimensions; the Android app just tells it which pixels to process by passing a vector containing the pixel coordinates.
And why I don't like the solutions proposed in the links above:
The answers given transform coordinates in a 1 to 1 fashion. This approach is clearly not valid for my task, since an area of, say, 50 points in the ImageView of a certain size on the screen cannot correspond to an area of the same number of pixels in the real image; the different aspect ratio has to be taken into account.
As an example, this is the area that should be selected if the image is smaller than the ImageView shown in the app:
Matteo,
It seems this is more a question of how much error you can (subjectively) tolerate in which pixels you send to the server. The fact remains that for any aspect ratio that does not come out to a nice neat integer, you have to decide which direction to 'push' your selection box.
The solutions you linked to are perfectly good solutions. You have to ask yourself: Will the user notice if the image I process is one pixel off from the selection box shown on the screen? My guess is probably not. I can't imagine the user will have that sort of pixel precision anyways when selecting a rectangle with their big fat finger on a touchscreen :D
Since this is the case, I would just let the floor()
-ing that occurs when casting to an integer take care of which pixels you end up passing to the server.
Let's look at an example.
Let's define the width and height of our ImageView and Bitmap to be:
ImageViewWidth = 400, ImageViewHeight = 150
BitmapWidth = 176, BitmapHeight = 65
Then the aspect ratios you will use to convert your selection box between them will be:
WidthRatio = BitmapWidth / ImageViewWidth = 176 / 400 = 0.44
HeightRatio = BitmapHeight / ImageViewHeight = 65 / 150 ≈ 0.433
Some nice ugly numbers. Whatever pixel I am on in the ImageView will correspond to a pixel in the Bitmap like so:
BitmapPixelX = ImageViewPixelX * WidthRatio
BitmapPixelY = ImageViewPixelY * HeightRatio
Now, I put this Bitmap on the screen in my ImageView for the user to select a rectangle, and the user selects a rectangle with top-left and bottom-right coordinates in the ImageView as such:
RectTopLeftX = 271, RectTopLeftY = 19
RectBottomRightX = 313, RectBottomRightY = 42
How do I determine which pixels in the Bitmap these correspond to? Easy. The ratios we determined earlier. Let's look at just the top-left coordinates for now.
RectTopLeftX * WidthRatio = 271 * 0.44 = 119.24
RectTopLeftY * HeightRatio = 19 * 0.433 ≈ 8.23
For RectTopLeftX, we find ourselves at a BitmapPixelX value of 119, and then about a quarter of the way into the pixel. Well, if we floor() this value and the corresponding BitmapPixelY value of 8.23, we will be sending pixel (119, 8)
to the server for processing. If we were to ceil()
these values, we will be sending pixel (120, 9) to the server for processing. This is the part that is entirely up to you.
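A minimal sketch of that computation in plain Java (variable names are illustrative, not from your code; note that 65 / 150 works out to ≈ 0.433, so the Y coordinate lands at ≈ 8.23):

```java
public class FloorOrCeil {
    public static void main(String[] args) {
        // Map the ImageView point (271, 19) into Bitmap coordinates
        double bitmapPixelX = 271 * (176.0 / 400.0); // 119.24
        double bitmapPixelY = 19 * (65.0 / 150.0);   // ≈ 8.23

        // floor(): process the pixel we land in
        int flooredX = (int) Math.floor(bitmapPixelX); // 119
        int flooredY = (int) Math.floor(bitmapPixelY); // 8

        // ceil(): process the next pixel over
        int ceiledX = (int) Math.ceil(bitmapPixelX); // 120
        int ceiledY = (int) Math.ceil(bitmapPixelY); // 9

        System.out.println("floor: (" + flooredX + ", " + flooredY + ")");
        System.out.println("ceil:  (" + ceiledX + ", " + ceiledY + ")");
    }
}
```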
You will (nearly) always land in some fractional part of a pixel. Whether you send the pixel you land in, or the one next to it is your call. I would say that this is going to be entirely unnoticeable by your user, and so to reiterate, just let the floor()
-ing that occurs when casting to an integer take care of it.
Hope that helps!
Upon reading the question again more slowly, I think I better understand what you are asking/confused about. I will use my example above to illustrate.
You are saying that there are 176 pixels in the Bitmap, and 400 pixels in the ImageView. Therefore, the mapping from one to the other is not 1:1, and this will cause problems when figuring out what pixels to pull out for processing.
But it doesn't! When you convert the coordinates of the rectangle bounds in the ImageView to coordinates in the Bitmap, you're simply giving the range of pixels to iterate over in the Bitmap. It's not a description of how each individual pixel in the ImageView maps to a corresponding pixel in the Bitmap.
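To make that concrete, here is a hedged sketch in plain Java (hypothetical names, using the example dimensions from above) that converts only the rectangle bounds and then iterates over every Bitmap pixel in the converted range:

```java
import java.util.Vector;

public class SelectionMapper {
    public static void main(String[] args) {
        float widthRatio = 176f / 400f;   // BitmapWidth / ImageViewWidth
        float heightRatio = 65f / 150f;   // BitmapHeight / ImageViewHeight

        // Selection rectangle in ImageView coordinates
        int left = 271, top = 19, right = 313, bottom = 42;

        // Convert only the bounds; the int cast floors positive values
        int bmpLeft = (int) (left * widthRatio);
        int bmpTop = (int) (top * heightRatio);
        int bmpRight = (int) (right * widthRatio);
        int bmpBottom = (int) (bottom * heightRatio);

        // Every Bitmap pixel in the converted range gets collected
        Vector<int[]> pixelsInImage = new Vector<int[]>();
        for (int x = bmpLeft; x <= bmpRight; x++) {
            for (int y = bmpTop; y <= bmpBottom; y++) {
                pixelsInImage.add(new int[] { x, y });
            }
        }

        System.out.println("Bitmap range: (" + bmpLeft + ", " + bmpTop
                + ") to (" + bmpRight + ", " + bmpBottom + ")");
        System.out.println(pixelsInImage.size() + " pixels to send");
    }
}
```

If you want to guarantee the selection is fully covered, you could instead floor() the left/top bounds and ceil() the right/bottom bounds, at the cost of possibly including a pixel the user did not quite touch.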
I hope that clears up my confusion about your confusion.