As part of an application that I'm developing for Android, I'd like to show the user an edge-detected version of an image they have taken (something similar to the example below).

[image: example of an edge-detected photo]

To achieve this I've been looking at the Sobel operator and how to implement it in Java. However, many of the examples I've found make use of objects and methods from AWT (like this example), which isn't part of Android.

My question, then, is: does Android provide any alternatives to the AWT features used in the above example? If we were to rewrite that example using only the libraries built into Android, how would we go about it?

+1  A: 

Since you don't have BufferedImage on Android, you can do all the basic operations yourself:

Bitmap b = ...
int width = b.getWidth();
int height = b.getHeight();
int stride = b.getRowBytes(); // bytes per row, useful if you work on raw buffers
for (int x = 0; x < width; x++) {
    for (int y = 0; y < height; y++) {
        int pixel = b.getPixel(x, y);
        // you have the source pixel, now transform it and write to the destination
    }
}

As you can see, this covers almost everything you need to port that AWT example; just swap in your own `convolvePixel` function.
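For the `convolvePixel` part, here is a minimal sketch of the Sobel step itself. To keep it free of Android dependencies, it operates on a plain grayscale `int[]` (luminance values 0..255 in row-major order) rather than a `Bitmap`; the class and method names are illustrative, not from the original example:

```java
// Sobel gradient magnitude at a single pixel of a grayscale image.
public final class Sobel {
    // Standard 3x3 Sobel kernels for horizontal and vertical gradients.
    private static final int[][] GX = {{-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1}};
    private static final int[][] GY = {{-1, -2, -1}, {0, 0, 0}, {1, 2, 1}};

    /** Gradient magnitude at (x, y); callers must keep x and y off the border. */
    static int convolvePixel(int[] pixels, int width, int x, int y) {
        int gx = 0, gy = 0;
        for (int j = -1; j <= 1; j++) {
            for (int i = -1; i <= 1; i++) {
                int p = pixels[(y + j) * width + (x + i)];
                gx += GX[j + 1][i + 1] * p;
                gy += GY[j + 1][i + 1] * p;
            }
        }
        // |gx| + |gy| approximates the true magnitude; clamp to 0..255.
        return Math.min(255, Math.abs(gx) + Math.abs(gy));
    }
}
```

On a flat region the kernels sum to zero, so the result is 0; across a hard vertical edge the magnitude saturates at 255.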

Reflog
This is great, but Bitmap.getPixel() and Bitmap.setPixel() seem to be really slow for me when I'm doing it pixel by pixel. I thought it would be better to use Bitmap.getPixels() at the beginning to copy the bitmap's values as integers to an int[]. How would I perform convolution on an array of RGB int values rather than the bitmap?
greenie
You are correct, fetching the whole array is faster. To perform convolutions on the array, you iterate with the same kind of for loops and read each pixel's value, either working on the separated R, G and B channels or composing a pixel from them with the `Color.rgb(r1,g1,b1)` function.
Reflog
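To illustrate that suggestion, here is a sketch of a convolution over the `int[]` you would get from `Bitmap.getPixels()`. The bit shifts are the plain-Java equivalent of what Android's `Color.red`/`Color.green`/`Color.blue` and `Color.rgb` helpers do, so the snippet runs without Android; the `kernel`/`divisor` parameters and class name are illustrative:

```java
// Convolve an ARGB int[] (as filled by Bitmap.getPixels) with a square kernel.
public final class ArrayConvolution {
    static int[] convolve(int[] src, int w, int h, int[][] kernel, int divisor) {
        int[] dst = new int[src.length];
        int k = kernel.length / 2; // kernel radius; border pixels are left black
        for (int y = k; y < h - k; y++) {
            for (int x = k; x < w - k; x++) {
                int r = 0, g = 0, b = 0;
                for (int j = -k; j <= k; j++) {
                    for (int i = -k; i <= k; i++) {
                        int p = src[(y + j) * w + (x + i)];
                        int weight = kernel[j + k][i + k];
                        r += weight * ((p >> 16) & 0xFF); // Color.red(p)
                        g += weight * ((p >> 8) & 0xFF);  // Color.green(p)
                        b += weight * (p & 0xFF);         // Color.blue(p)
                    }
                }
                r = clamp(r / divisor);
                g = clamp(g / divisor);
                b = clamp(b / divisor);
                // Recompose the pixel; on Android this is Color.rgb(r, g, b).
                dst[y * w + x] = 0xFF000000 | (r << 16) | (g << 8) | b;
            }
        }
        return dst;
    }

    private static int clamp(int v) { return v < 0 ? 0 : (v > 255 ? 255 : v); }
}
```

With an identity kernel (`{{0,0,0},{0,1,0},{0,0,0}}`, divisor 1) the interior pixels pass through unchanged, which makes a handy sanity check before plugging in the Sobel kernels.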