Hello,

In a particular application I needed machine learning (I only know what I studied in my undergraduate course). I used Support Vector Machines and solved the problem. It's working fine.

Now I need to improve the system. The problems are:

  1. I get additional training examples every week. Right now the system retrains from scratch on the updated set (old examples + new examples). I want to make this incremental: use the previous knowledge (instead of the previous examples) together with the new examples to get a new model (knowledge).

  2. Right now my training examples have 3 classes, so every training example is fitted into one of these 3 classes. I want the functionality of an "Unknown" class: anything that doesn't fit these 3 classes must be marked as "unknown". But I can't treat "Unknown" as just another class and provide examples for it too.

  3. Assuming the "unknown" class is implemented: when the class is "unknown", the user of the application inputs what he thinks the class might be. Now I need to incorporate the user's input into the learning, and I have no idea how to do this either. Would it make any difference if the user inputs a new class (i.e. a class that is not already in the training set)?

Do I need to choose a new algorithm, or can Support Vector Machines do this?

PS: I'm using the libsvm implementation of SVM.

+1  A: 
  1. There are algorithms to train an SVM incrementally, but I don't think libSVM implements this. Consider whether you really need this feature: I see no problem with your current approach unless the training process is really too slow. If it is, could you retrain in batches, i.e. only after every 100 or so new examples (see the first sketch after this list)?
  2. You can get libSVM to produce probabilities of class membership. I think this can be done for multiclass classification, but I'm not entirely sure about that. You will need to decide on some threshold below which the classification is not certain enough, and then output 'Unknown'. Something like a threshold on the difference between the most likely and second most likely class should achieve this (see the second sketch after this list).
  3. I think libSVM scales to any number of new classes. The accuracy of your model may well suffer by adding new classes, however.
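
A minimal sketch of the batched retraining suggested in point 1, using libsvm's Python tools (svmutil; the import path differs in some distributions). The batch size of 100 and the accumulator names are assumptions for illustration:

    # Buffer new examples; retrain from scratch once BATCH have arrived.
    from svmutil import svm_train

    BATCH = 100
    all_y, all_x = [], []   # full history: labels / sparse feature dicts
    new_y, new_x = [], []   # examples received since the last retrain
    model = None

    def add_example(label, features):
        global model
        new_y.append(label)
        new_x.append(features)
        if len(new_y) >= BATCH:
            all_y.extend(new_y)
            all_x.extend(new_x)
            new_y.clear()
            new_x.clear()
            model = svm_train(all_y, all_x, '-c 1')  # full batch retrain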
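
And a sketch of the probability-threshold idea in point 2. It assumes training/test files in libsvm format; the 0.2 margin is an arbitrary placeholder you would tune on held-out data:

    from svmutil import svm_read_problem, svm_train, svm_predict

    y_train, x_train = svm_read_problem('train.txt')
    y_test, x_test = svm_read_problem('test.txt')

    # '-b 1' enables probability estimates at training and prediction time
    model = svm_train(y_train, x_train, '-c 1 -b 1')
    labels, acc, probs = svm_predict(y_test, x_test, model, '-b 1')

    MARGIN = 0.2  # assumed minimum gap between the top two class probabilities
    for label, p in zip(labels, probs):
        top, runner_up = sorted(p, reverse=True)[:2]
        print('Unknown' if top - runner_up < MARGIN else label)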
StompChicken
I'm not so sure about incremental algorithms. Although I believe incremental algorithms exist, they're few and far between, so finding implementations may be difficult.
Chris S
@Chris S Yes, I agree. It's probably because batched training is usually a simpler and more pragmatic solution.
StompChicken
Playing around with libsvm, it looks like the probabilities assigned to all classes always sum to 1, so you'll never have a case where an "unknown" sample has low probabilities for all classes. I can't even find a way to "trick" it into giving all classes equal probability.
Chris S
+2  A: 

I just wrote my answer using the same organization as your question (1., 2., 3.).

  1. Can SVMs do this--i.e., incremental learning? Multi-layer perceptrons of course can, because subsequent training instances don't affect the basic network architecture; they just cause adjustments in the values of the weight matrices. But SVMs? It seems to me that (in theory) a single additional training instance could change the selection of the support vectors. But again, I don't know.

  2. I think you can solve this problem quite easily by configuring LIBSVM in one-against-many mode--i.e., as a one-class classifier. In my experience using SVMs (and in particular LIBSVM), and based on what I've read, SVMs, though they can certainly do multi-class classification, still perform best as one-class classifiers. Step-wise application of a one-class classifier can of course separate your data into more than two classes, but the algorithm is trained (and tested) one class at a time. If you do this, then whatever is left after step-wise execution against the test set is "unknown"--in other words, any data point that is not classified is by definition in that 'unknown' class (see the first sketch after this list).

  3. Why not make the user's guess a feature (i.e., just another independent variable)? The only other option is to make it the class label itself, and you don't want that. So you would, for instance, add a column to your data matrix, "user class guess", and populate it with some value most likely to have no effect for the data points not in the 'unknown' category, for which the user will not offer a guess. This value could be '0' or '1', but it really depends on how your data is scaled and normalized (see the second sketch below).
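
A rough sketch of the step-wise one-class approach in point 2, using libsvm's one-class mode ('-s 2') via the Python interface. The dict x_by_class (mapping each known label to its list of sparse feature dicts) and the nu value '-n 0.1' are assumptions:

    from svmutil import svm_train, svm_predict

    def train_one_class(xs):
        # a one-class SVM trains on positive examples only; the labels
        # passed here are ignored by libsvm in '-s 2' mode
        return svm_train([1] * len(xs), xs, '-s 2 -n 0.1')

    models = {c: train_one_class(x_by_class[c]) for c in (1, 2, 3)}

    def classify(x):
        # step-wise: ask each one-class model whether x lies inside its region
        for c, m in models.items():
            pred, _, _ = svm_predict([0], [x], m)
            if pred[0] == 1:    # +1 means 'belongs to this class'
                return c
        return 'unknown'        # rejected by all three models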
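
And a tiny sketch of point 3, encoding the user's guess as an extra feature. The feature index (4) and the 0-for-no-guess convention are assumptions; with libsvm's sparse format you could instead simply omit the key when no guess was offered:

    def add_guess_feature(features, guess=0):
        features = dict(features)       # copy the sparse feature dict
        if guess:
            features[4] = float(guess)  # hypothetical 'user class guess' column
        return features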

doug
+3  A: 

Your first item will likely be the most difficult, since there are essentially no good incremental SVM implementations in existence.

A few months ago, I also researched online or incremental SVM algorithms. Unfortunately, the current state of implementations is quite sparse. All I found was a Matlab example, OnlineSVR (a thesis project only implementing regression support), and SVMHeavy (only binary class support).

I haven't used any of them personally. They all appear to be at the "research toy" stage. I couldn't even get SVMHeavy to compile.

For now, you can probably get away with doing periodic batch training to incorporate updates. I also use LibSVM, and it's quite fast, so it should be a good substitute until a proper incremental version is implemented.

I also don't think SVMs can model the concept of an "unknown" sample by default. They typically work as a series of boolean classifiers, so a sample always ends up being positively classified as something, even if it is drastically different from anything seen previously. A possible workaround would be to model the ranges of your features, randomly generate samples that fall outside those ranges, and then add these to your training set.

For example, if you have an attribute called "color" with a minimum value of 4 and a maximum value of 123, then you could add these to your training set:

    [({'color': 3}, 'unknown'), ({'color': 125}, 'unknown')]

to give your SVM an idea of what an "unknown" color means.
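
A sketch of that workaround, generalized over all features. The margin of 1.0 and the numeric label UNKNOWN = 0 (libsvm labels must be numbers) are assumptions:

    from svmutil import svm_train

    UNKNOWN = 0  # assumed numeric label reserved for the 'unknown' class

    def synth_unknowns(xs, margin=1.0):
        # for each feature, emit one sample just below and one just
        # above the observed range; these get labelled 'unknown'
        keys = {k for x in xs for k in x}
        out = []
        for k in keys:
            vals = [x[k] for x in xs if k in x]
            out.append({k: min(vals) - margin})
            out.append({k: max(vals) + margin})
        return out

    extra = synth_unknowns(x_train)
    model = svm_train(y_train + [UNKNOWN] * len(extra), x_train + extra, '-c 1')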

Chris S