Hi

I have a matrix A representing similarities of pixel intensities of an image. For example, consider a 10 x 10 image. Matrix A in this case would be of dimension 100 x 100, and element A(i,j) would have a value in the range 0 to 1, representing the similarity of pixel i to pixel j in terms of intensity.

I am using OpenCV for image processing and the development environment is C on Linux.
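To give an idea, A could be filled from a grayscale image along these lines (illustrative sketch only; the Gaussian kernel on intensity differences and the value of sigma are placeholders):

/* Illustrative sketch: fill an n x n affinity matrix (n = w*h) from an
   8-bit single-channel IplImage, using a Gaussian kernel on the
   intensity difference of each pixel pair.
   Needs <math.h> for exp(); link with -lm. */
int w = img->width, h = img->height, n = w * h;
double sigma = 30.0; /* kernel width, a free parameter */

for (i = 0; i < n; i++) {
  for (j = 0; j < n; j++) {
    double pi = cvGetReal2D(img, i / w, i % w); /* intensity of pixel i */
    double pj = cvGetReal2D(img, j / w, j % w); /* intensity of pixel j */
    double d = pi - pj;
    A[i][j] = exp(-(d * d) / (2.0 * sigma * sigma)); /* value in (0,1] */
  }
}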

My objective is to compute the eigenvectors of matrix A, and I have used the following approach:

static CvMat mat, *eigenVec, *eigenVal;
static double A[100][100]={}, Ain1D[10000]={};
int cnt=0;

//Converting matrix A into a one dimensional array
//Reason: That is how cvMat requires it
for(i = 0; i < affnDim; i++){
  for(j = 0; j < affnDim; j++){
    Ain1D[cnt++] = A[i][j];
  }
}

mat = cvMat(100, 100, CV_32FC1, Ain1D); 

cvEigenVV(&mat, eigenVec, eigenVal, 1e-300);

for(i = 0; i < 100; i++){
  val1 = cvmGet(eigenVal, i, 0); //Fetching eigenvalue i

  for(j = 0; j < 100; j++){
    matX[i][j] = cvmGet(eigenVec, i, j); //Fetching component j of eigenvector i
  }
}

Problem: after execution, nearly all components of all the eigenvectors come out as zero. I tried different images and also tried populating A with random values between 0 and 1, but got the same result.

A few of the top eigenvalues returned look like the following:

9805401476911479666115491135488.000000  
-9805401476911479666115491135488.000000  
-89222871725331592641813413888.000000  
89222862280598626902522986496.000000  
5255391142666987110400.000000

I am now thinking along the lines of using cvSVD(), which performs singular value decomposition of a real floating-point matrix and might yield the eigenvectors: since A is symmetric, its singular vectors should coincide with its eigenvectors (up to sign). But before that I thought of asking here. Is there anything absurd in my current approach? Am I using the right API, i.e. cvEigenVV(), for the right input matrix (my matrix A is a floating-point matrix)?
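The call I have in mind would be roughly as follows (untested sketch; W, U and V allocated beforehand):

/* Untested sketch: SVD of the 100x100 affinity matrix.
   W receives the singular values, U and V the singular vectors. */
CvMat *W = cvCreateMat(100, 1, CV_32FC1);   /* singular values        */
CvMat *U = cvCreateMat(100, 100, CV_32FC1); /* left singular vectors  */
CvMat *V = cvCreateMat(100, 100, CV_32FC1); /* right singular vectors */

cvSVD(&mat, W, U, V, 0); /* flags = 0: mat is left unmodified */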

cheers

+6  A: 

I know this post may seem unrelated to the topic, so before you downvote, please see my discussion with the OP in the comments above...

The following is my attempt at implementing the spectral clustering algorithm applied to image pixels in MATLAB. I followed exactly the paper mentioned by @Andriyev.

SIGMA = 2e-3
NUM_CLUSTERS = 4

%% # Loading and preparing a sample image
%# This reads a simple RGB image and resizes it to a
%# smaller size to make things run faster
I0 = im2double( imread('house.png') );
I = imresize(I0, 0.1);
[r c dim] = size(I);

%# reshape as r*c-by-3 (columnwise-order)
I = horzcat( reshape(I(:,:,1),[r*c 1]), ...
             reshape(I(:,:,2),[r*c 1]), ...
             reshape(I(:,:,3),[r*c 1]) );
numPoints = r*c

%% # 1) Compute affinity matrix
%# for each pair of pixels, apply a Gaussian kernel
%# to obtain a measure of similarity
K = exp(-SIGMA * squareform(pdist(I,'euclidean')).^2);

%# and we plot the matrix obtained
imagesc(K), axis xy, colorbar, colormap(hot)

%% # 2) Compute the Laplacian matrix L
D = diag(sum(K,2).^(-0.5));
L = D*K*D;

%% # 3) perform an eigen decomposition of the Laplacian matrix L
[V,D] = eig(L);
D = real( diag(D) );

%# Sort the eigenvalues and the eigenvectors in descending order.
[D order] = sort(D, 'descend');
V = V(:, order);

%# keep only the first k vectors
NUM_VECTORS = sum(cumsum(D)./sum(D) < 0.99999)+1
V = V(:, 1:NUM_VECTORS);

%% # 4) re-normalize rows of V to unit length
VV = bsxfun(@times, V, 1./sqrt(sum(V.^2,2)));

%% # 5) cluster rows of VV using K-Means
opts = statset('MaxIter', 100, 'Display', 'iter');
[clustIDX, clusters] = kmeans(VV, NUM_CLUSTERS, 'options', opts, ...
             'distance', 'sqEuclidean', 'EmptyAction', 'singleton');

%% # 6) assign pixels to cluster and show the results
%# assign for each pixel the color of the cluster it belongs to
clrs = lines(NUM_CLUSTERS);
J = reshape(clrs(clustIDX, :), [r c 3]);

%# show results
figure, suptitle(sprintf('Clustering into K=%d clusters',NUM_CLUSTERS))
subplot(121), imshow(I0), title('original image')
subplot(122), imshow(J), title({'clustered pixels' '(color-coded classes)'})

... and using a simple house image I drew in Paint, the results were:

[images: the Laplacian matrix, and the clustered image]

and by the way, the first 4 eigenvalues used were:

1.0000
0.0014
0.0004
0.0002

and the corresponding eigenvectors [columns of length r*c=400]:

-0.0500    0.0572   -0.0112   -0.0200
-0.0500    0.0553    0.0275    0.0135
-0.0500    0.0560    0.0130    0.0009
-0.0500    0.0572   -0.0122   -0.0209
-0.0500    0.0570   -0.0101   -0.0191
-0.0500    0.0562   -0.0094   -0.0184
......

Note that there are steps performed above which you didn't mention in your question (building the Laplacian matrix, and re-normalizing the rows of the eigenvector matrix).
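For reference, the "Laplacian" used in step 2 above is the normalized affinity matrix (as defined in the paper I followed):

L = D^(-1/2) * K * D^(-1/2), where D is the diagonal degree matrix with D(i,i) = sum_j K(i,j)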

Amro
That looks awesome. Yeah, I just skipped the Laplacian and normalization steps in my question to keep it to the point. Well, now maybe I have a good reason to learn MATLAB. Thanks for the guidance and effort on your part.
Andriyev
The truth is I was reading on the subject of kernel PCA myself, which is very similar, and I found this an opportunity to understand it better by coding it... and that's one of the things I love about MATLAB: you can implement an algorithm like this rather quickly and in only a few lines! (compared to coding it in C)
Amro
+1  A: 

I would recommend this article. The author implements eigenfaces for face recognition. On page 4 you can see that he uses cvCalcEigenObjects to generate the eigenvectors from a set of images. The article also shows all the preprocessing steps necessary for these computations.
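From memory, the call looks roughly like this (untested sketch; images, nImages and imgSize are assumed to be set up elsewhere as an array of equally-sized 8-bit grayscale IplImage pointers, their count, and their common size):

/* Untested sketch: compute the "eigenobjects" of a set of images with
   the OpenCV 1.x API; names below are assumptions, not from the article. */
int nEigens = nImages - 1; /* at most nImages - 1 eigenvectors */
IplImage **eigenVects = (IplImage **)cvAlloc(nEigens * sizeof(IplImage *));
IplImage *avgImg = cvCreateImage(imgSize, IPL_DEPTH_32F, 1);
float *eigenVals = (float *)cvAlloc(nEigens * sizeof(float));
CvTermCriteria limit = cvTermCriteria(CV_TERMCRIT_ITER, nEigens, 1);
int k;

for (k = 0; k < nEigens; k++)
  eigenVects[k] = cvCreateImage(imgSize, IPL_DEPTH_32F, 1);

cvCalcEigenObjects(nImages, (void *)images, (void *)eigenVects,
                   CV_EIGOBJ_NO_CALLBACK, 0, NULL, &limit,
                   avgImg, eigenVals);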

Janusz
+1  A: 

Hi

Here's a not very helpful answer:

What does theory (or maths scribbled on a piece of paper) tell you the eigenvectors ought to be? Approximately.

What does another library tell you the eigenvectors ought to be? Ideally, what does a system such as Mathematica or Maple (which can be persuaded to compute to arbitrary precision) tell you the eigenvectors ought to be? If not for a production-sized problem, then at least for a test-sized one.

I'm not an expert in image processing, so I can't be much more helpful, but I spend a lot of time with scientists, and experience has taught me that a lot of tears and anger can be avoided by doing some maths first and forming an expectation of what results you ought to get, before wondering why you got 0s all over the place. Sure, it might be an error in the implementation of an algorithm, or it might be loss of precision or some other numerical problem. But you don't know, and shouldn't follow up those lines of inquiry yet.

Regards

Mark

High Performance Mark