Hi, I have an application that decides whether a human is handwaving, running, or walking. The idea is that I segment an action, say a handwave, into its constituent poses. For example:
for human1: pose7-pose3-pose7-..... represents handwave
for human3: pose1-pose7-pose1-..... represents handwave
for human7: pose1-pose1-pose7-..... represents handwave
for human20: pose3-pose7-pose7-..... represents handwave
for human1: pose11-pose33-pose77-..... represents walking
for human2: pose31-pose33-pose77-..... represents walking
for human3: pose11-pose77-pose77-..... represents walking
for human20: pose11-pose33-pose11-..... represents walking
I used the above vectors to train an SVM and a neural net in MATLAB.
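To illustrate the setup, here is a simplified sketch of the training step, assuming each sequence has been encoded as a fixed-length row vector of pose indices (fitcecoc is used here only as one example of a multiclass SVM; the variable names and values are illustrative):

```matlab
% Simplified sketch: pose sequences as fixed-length rows of pose indices.
% (Illustrative values only; real sequences are longer and padded.)
X = [ 7  3  7;   1  7  1;   1  1  7;   3  7  7;   % handwave examples
     11 33 77;  31 33 77;  11 77 77;  11 33 11];  % walking examples
y = [repmat({'handwave'}, 4, 1); repmat({'walking'}, 4, 1)];

svmModel = fitcecoc(X, y);              % multiclass SVM (ECOC wrapper)
pred     = predict(svmModel, [3 1 4]);  % classify a new pose sequence
```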
Now I test it on test images, which I have likewise segmented into poses. In MATLAB, both the SVM and the neural net require the test vectors to have the same size as the training vectors.
To make the sizes equal, I tried two methods (sketched in code after the example below):
1st method: if I append 0 (think of it as pose0, which is an invalid pose), I get really good performance.
2nd method: if I instead copy poses from the beginning of the sequence and append them to the end until the sizes match, performance decreases.
For example:
training set: pose1-pose2-pose4-pose7-pose2-pose4-pose7
(1st method) test set: pose3-pose1-pose4-0-0-0-0, or
(2nd method) test set: pose3-pose1-pose4-pose3-pose1-pose4-pose3
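Here is a small sketch of the two padding schemes, assuming poses are stored as numeric indices in a row vector (variable names such as trainLen are just for illustration):

```matlab
% Sketch of the two padding schemes (poses stored as numeric indices;
% trainLen is the length of the training vectors).
testSeq  = [3 1 4];
trainLen = 7;

% 1st method: pad with 0 ("pose0", an invalid pose)
padded1 = [testSeq, zeros(1, trainLen - numel(testSeq))];
% padded1 = [3 1 4 0 0 0 0]

% 2nd method: repeat the sequence cyclically until it reaches trainLen
reps    = ceil(trainLen / numel(testSeq));
padded2 = repmat(testSeq, 1, reps);
padded2 = padded2(1:trainLen);
% padded2 = [3 1 4 3 1 4 3]
```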
I would expect better classification with the 2nd method, since the appended values are actual pose values, whereas pose0 is not a real pose.
Do you have any ideas? Regards