Hi,
I'm using OpenCV 2.1 with the built-in Python interface (the old cv module). I'm trying to load an image from a file, convert it to Lab, and run k-means clustering on the pixels of the a/b plane.
I have working MATLAB code but don't know how to do the same with OpenCV. How do I reshape a JPEG or PNG image into a sample matrix and feed it to KMeans2?
Thanks
The error I'm getting:
OpenCV Error: Assertion failed (labels.isContinuous() && labels.type() == CV_32S && (labels.cols == 1 || labels.rows == 1) && labels.cols + labels.rows - 1 == data.rows) in cvKMeans2, file /build/buildd/opencv-2.1.0/src/cxcore/cxmatrix.cpp, line 1202
Traceback (most recent call last):
  File "main.py", line 24, in <module>
    (cv.CV_TERMCRIT_EPS + cv.CV_TERMCRIT_ITER, 10, 1.0))
cv.error: labels.isContinuous() && labels.type() == CV_32S && (labels.cols == 1 || labels.rows == 1) && labels.cols + labels.rows - 1 == data.rows
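If I'm reading that assertion right, the labels argument has to be a continuous, single-column (or single-row) CV_32SC1 matrix with exactly one entry per input sample, so my guess (untested) is that it should be allocated with something like this instead of the 8-bit image I create below:

clusters = cv.CreateMat(img.width*img.height, 1, cv.CV_32SC1)   # one int32 label per pixel

But I'm still not sure how to pack the a/b data itself.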
Working MATLAB code:
im = imread(fName);
cform = makecform('srgb2lab');
lab_im = applycform(im, cform);
ab = double(lab_im(:,:,2:3));               % keep only the a and b channels
nrows = size(ab,1); ncols = size(ab,2);     % image dimensions
ab = reshape(ab, nrows*ncols, 2);           % one row per pixel, columns = [a b]
nColors = 2;
[cluster_idx, cluster_center] = kmeans(ab, nColors, 'distance', 'sqEuclidean', 'Replicates', 3, 'start', 'uniform');
python-opencv code (not working):
import cv

img = cv.LoadImage("test.jpg")                                        # 8-bit, 3-channel image
clusters = cv.CreateImage((img.width*img.height, 1), img.depth, 1)    # meant to hold one label per pixel
lab_img = cv.CreateImage(cv.GetSize(img), img.depth, 3)
cv.CvtColor(img, lab_img, cv.CV_RGB2Lab)                              # convert to Lab
ab_img = cv.CreateImage(cv.GetSize(img), img.depth, 2)
cv.MixChannels([lab_img], [ab_img], [(1, 0), (2, 1)])                 # copy the a and b channels into ab_img
cv.Reshape(ab_img, ab_img.channels, ab_img.width*ab_img.height)       # note: the reshaped header returned here is discarded
cluster_count = 3
cv.KMeans2(ab_img, cluster_count, clusters,                           # this is the call that raises the assertion
           (cv.CV_TERMCRIT_EPS + cv.CV_TERMCRIT_ITER, 10, 1.0))
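Based on the assertion text, here is my best guess at how the data should be packed for KMeans2 (float samples, one row per pixel, plus a CV_32SC1 label column). It is untested; the switch to CV_BGR2Lab (since LoadImage returns BGR) and the Reshape/ConvertScale/CreateMat steps are my assumptions about the old cv API:

import cv

img = cv.LoadImage("test.jpg")                          # 8-bit, 3-channel (BGR) image

# BGR -> Lab (LoadImage gives BGR order, so CV_BGR2Lab rather than CV_RGB2Lab)
lab_img = cv.CreateImage(cv.GetSize(img), cv.IPL_DEPTH_8U, 3)
cv.CvtColor(img, lab_img, cv.CV_BGR2Lab)

# copy the a and b channels into a continuous 2-channel matrix
ab = cv.CreateMat(img.height, img.width, cv.CV_8UC2)
cv.MixChannels([lab_img], [ab], [(1, 0), (2, 1)])

# KMeans2 seems to want float samples (one row per sample) and an int32 label
# column with exactly one row per sample
n_pixels = img.width * img.height
samples_8u = cv.Reshape(ab, 1, n_pixels)                # view ab as n_pixels x 2, still 8-bit
samples = cv.CreateMat(n_pixels, 2, cv.CV_32FC1)
cv.ConvertScale(samples_8u, samples)                    # 8U -> 32F (scale defaults to 1)

clusters = cv.CreateMat(n_pixels, 1, cv.CV_32SC1)       # one int32 label per pixel
cluster_count = 2                                       # two colors, as in the MATLAB version
cv.KMeans2(samples, cluster_count, clusters,
           (cv.CV_TERMCRIT_EPS + cv.CV_TERMCRIT_ITER, 10, 1.0))

Is that roughly the right approach, or is there a simpler way to reshape the image for KMeans2?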