Hi there,

I would like to implement similarity search in MATLAB, and I want to know whether that is possible.

My plan is to use two popular similarity measures, Euclidean Distance and Dynamic Time Warping (DTW), both applied to a time-series dataset. My question at this point is: how can I evaluate the performance and accuracy of these two measures? I have seen some literature suggesting I should use the k-NN algorithm.

Then I plan to apply dimensionality reduction to the time-series dataset. After reducing the dimensionality, I will need to index the dataset using an R-tree or any other available indexing technique.

However, my problem is that to do this I need R-tree MATLAB code, which I can hardly find anywhere on the internet ...

I do realise that most implementations of similarity search are in C++, C, and Java, but I'm not familiar with those. I'm hoping I can implement these in MATLAB ... Could any guru help me with this?

Also, what kind of evaluation can I do to measure the performance of each algorithm?

Thanks

+2  A: 

Recently (R2010a, I believe), MATLAB added functions for k-Nearest Neighbor (kNN) searching using kd-trees (a spatial indexing method similar to R-trees) to the Statistics Toolbox. Example:

load fisheriris                            %# Iris dataset
Q = [6 3 4 1 ; 5 4 3 2];                   %# query points

%# build kd-tree
knnObj = createns(meas, 'NSMethod','kdtree', 'Distance','euclidean');

%# find k=5 Nearest Neighbors to Q
[idx, Dist] = knnsearch(knnObj, Q, 'K',5);

Refer to this page for a nice description.

Also, if you have the Image Processing Toolbox, it has long contained an implementation of kd-trees and kNN searching. They are private functions, though:

[matlabroot '\images\images\private\kdtree.m']
[matlabroot '\images\images\private\nnsearch.m']

To compare your two approaches (Dynamic Time Warping and Euclidean distance), you can set up a classic classification problem: given a set of labeled training/testing time series, the task is to predict the label of each test sequence by finding the most similar training sequences using kNN and then predicting the majority class. To evaluate performance, use any of the standard classification measures, such as accuracy/error rate.
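A minimal sketch of that evaluation, assuming a hand-rolled O(n*m) dynamic-programming DTW and 1-NN (the toy data, variable names, and helper functions here are mine for illustration, not toolbox functions):

```matlab
function demo_knn_eval()
    %# toy labeled data: two classes of noisy sine/cosine sequences
    rand('state',0);
    n = 20; len = 50; t = linspace(0,2*pi,len);
    X = [repmat(sin(t),n,1); repmat(cos(t),n,1)] + 0.3*rand(2*n,len);
    y = [ones(n,1); 2*ones(n,1)];

    %# split into train/test halves
    trainIdx = 1:2:2*n;  testIdx = 2:2:2*n;

    %# 1-NN accuracy under each distance measure
    accEuc = knn1_accuracy(X,y,trainIdx,testIdx,@(a,b) norm(a-b));
    accDtw = knn1_accuracy(X,y,trainIdx,testIdx,@dtw_dist);
    fprintf('1-NN accuracy: Euclidean %.2f, DTW %.2f\n', accEuc, accDtw);
end

function acc = knn1_accuracy(X,y,trainIdx,testIdx,distFcn)
    correct = 0;
    for i = testIdx
        d = arrayfun(@(j) distFcn(X(i,:),X(j,:)), trainIdx);
        [~,k] = min(d);                      %# nearest training sequence
        correct = correct + (y(trainIdx(k)) == y(i));
    end
    acc = correct / numel(testIdx);
end

function d = dtw_dist(a,b)
    %# plain dynamic-programming DTW, no warping window
    n = numel(a); m = numel(b);
    D = inf(n+1,m+1); D(1,1) = 0;
    for i = 1:n
        for j = 1:m
            cost = (a(i)-b(j))^2;
            D(i+1,j+1) = cost + min([D(i,j+1), D(i+1,j), D(i,j)]);
        end
    end
    d = sqrt(D(n+1,m+1));
end
```

For a more trustworthy comparison you would repeat the split several times (cross-validation) and average the accuracies rather than relying on a single train/test partition.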

Amro
Thanks for your informative answer. I am pretty much a newcomer in this field, so please bear with me. 1) Do you recommend implementing the similarity search in MATLAB, or should I implement it in Java or C++? 2) Is indexing compulsory after dimensionality reduction? 3) Can you explain the evaluation you proposed in more detail? The output of my similarity search will be a set of time series similar to the query time series. How do I move on from this point to evaluation? Note that I want to apply it to a few types of data, such as stock data and ECG data.
Jaz
1) I recommend you stick with whatever language you feel most comfortable in. 2) The whole point of indexing is to speed up the search: for a large number of time series we won't have to compare against all of them; instead we get a set of candidates from the R-tree. One thing to keep in mind is that for dimensionality above roughly 20, spatial indexing degrades to a linear scan. 3) As I explained, you simply take the majority label of the candidate set of time series returned and compare it against the actual label; if it matches, that's a correct classification, otherwise an error.
Amro
Thanks. I have decided to do just a simple similarity search. I am planning to use range search, where any time series whose computed distance to the query is within an error bound e is considered similar. The problem is: how do I determine the value of e? Is there any code that explains range search / sequential search?
Jaz
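For what it's worth, a sequential range search is just a linear scan with a threshold. A minimal sketch (the percentile heuristic for picking e is one common rule of thumb I'm assuming here, not a prescribed method):

```matlab
%# Sequential range search: return indices of all series within
%# Euclidean distance e of the query.
%# X is an N-by-len matrix of time series, q is a 1-by-len query.
function idx = range_search(X, q, e)
    d = sqrt(sum(bsxfun(@minus, X, q).^2, 2));   %# distance to each row
    idx = find(d <= e);
end

%# One heuristic for choosing e: take a low percentile of the pairwise
%# distances in the dataset, so roughly that fraction of pairs counts
%# as "similar" (illustrative assumption, not a standard rule):
%#   D = pdist(X);            %# Statistics Toolbox
%#   e = prctile(D, 5);
```

In practice you would tune e on held-out data: sweep a range of thresholds and pick the one that gives the best trade-off between how many true matches are returned and how many false ones slip in.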
