Using kernlab I've trained a model with code like the following:

my.model <- ksvm(result ~ f1+f2+f3, data=gold, kernel="vanilladot")

Since it's a linear model, I prefer at run-time to compute the scores as a simple weighted sum of the feature values rather than using the full SVM machinery. How can I convert the model to something like this (some made-up weights here):

> c(.bias=-2.7, f1=0.35, f2=-0.24, f3=2.31)
.bias    f1    f2    f3 
-2.70  0.35 -0.24  2.31

where .bias is the bias term and the rest are feature weights?
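
That way, run-time scoring would reduce to something like this (using the made-up weights above):

w <- c(.bias = -2.7, f1 = 0.35, f2 = -0.24, f3 = 2.31)
score <- function(f1, f2, f3)
  unname(w[".bias"] + w["f1"] * f1 + w["f2"] * f2 + w["f3"] * f3)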

EDIT:

Here's some example data.

gold <- structure(list(result = c(-1, -1, -1, -1, -1, -1, -1, -1, -1, 
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 1, 1, 1, 1, 1, 1, 
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1), f1 = c(0, 0, 0, 0, 
0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 
1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1), f2 = c(13.4138113499447, 
13.2216999857095, 12.964145772169, 13.1975227965938, 13.1031520152764, 
13.59351759447, 13.1031520152764, 13.2700658838026, 12.964145772169, 
13.1975227965938, 12.964145772169, 13.59351759447, 13.59351759447, 
13.0897162110721, 13.364151238365, 12.9483051847806, 12.964145772169, 
12.964145772169, 12.964145772169, 12.9483051847806, 13.0937231331592, 
13.5362700880482, 13.3654209223623, 13.4356400945176, 13.59351759447, 
13.2659406408724, 13.4228886221088, 13.5103065354936, 13.5642812689161, 
13.3224757352068, 13.1779418771704, 13.5601730479315, 13.5457299603578, 
13.3729010596517, 13.4823595997866, 13.0965264603473, 13.2710281801434, 
13.4489887206797, 13.5132372154748, 13.5196188787197), f3 = c(0, 
1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 
0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0)), .Names = c("result", 
"f1", "f2", "f3"), class = "data.frame", row.names = c(NA, 40L
))
+4  A: 

To get the bias, just evaluate the model on a feature vector of all zeros. To get the coefficient of the first feature, evaluate the model on a feature vector with a "1" in the first position and zeros everywhere else, then subtract the bias, which you already know. Since the model is linear, the decision value really is just a weighted sum, so these probes recover the weights exactly. I'm afraid I don't know R syntax, but conceptually you want something like this:

bias = my.model.eval([0, 0, 0])
f1 = my.model.eval([1, 0, 0]) - bias
f2 = my.model.eval([0, 1, 0]) - bias
f3 = my.model.eval([0, 0, 1]) - bias

To test that you did it correctly, you can try something like this:

assert(bias + f1 + f2 + f3 == my.model.eval([1, 1, 1]))
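
In R, that probing might look like the following (a sketch, assuming the model was trained as a classifier, e.g. with type="C-svc", so that predict() accepts type="decision" and returns the raw score):

# probe the model at a single point and return its decision value
eval.model <- function(f1, f2, f3)
  predict(my.model, data.frame(f1 = f1, f2 = f2, f3 = f3),
          type = "decision")[1]

bias <- eval.model(0, 0, 0)
w <- c(.bias = bias,
       f1 = eval.model(1, 0, 0) - bias,
       f2 = eval.model(0, 1, 0) - bias,
       f3 = eval.model(0, 0, 1) - bias)

# the decision value at (1, 1, 1) should equal .bias + f1 + f2 + f3
stopifnot(isTRUE(all.equal(sum(w), eval.model(1, 1, 1))))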
dmazzoni
+3  A: 

If I'm not mistaken, I think you're asking how to extract the W vector of the SVM, where W is defined as:

W = \sum_i y_i \alpha_i x_i

(Apologies for the plain-text equation.) In words, W is just the sum of the support vectors x_i, each weighted by its label y_i times its coefficient \alpha_i. Once you've calculated W, you can read off the "weight" for any feature you want.

Assuming this is correct, you'd:

  1. Get the indices of your data that are the support vectors
  2. Get their weights (alphas)
  3. Calculate W

kernlab stores the support vector indices and their alpha values in lists (so that the same accessors work on multiclass problems, too); the list manipulation below is just to get at the actual data. The lists returned by alpha and alphaindex have length 1 when you have a two-class problem, which I'm assuming you do.

my.model <- ksvm(result ~ f1+f2+f3, data=gold, kernel="vanilladot", type="C-svc")
alpha.idxs <- alphaindex(my.model)[[1]]  # indices of the support vectors in `gold`
alphas <- alpha(my.model)[[1]]           # their (unsigned) alpha coefficients
y.sv <- gold$result[alpha.idxs]          # their labels (+1/-1)
# for unscaled data
sv.matrix <- as.matrix(gold[alpha.idxs, c('f1', 'f2', 'f3')])
weight.vector <- (y.sv * alphas) %*% sv.matrix
bias <- -b(my.model)  # b() returns the negative intercept, so negate it

Note, however, that kernlab actually scales your data before training. You can get the weights in that scaled space like so (where, I'd guess, the bias should be 0?):

weight.vector <- (y.sv * alphas) %*% xmatrix(my.model)[[1]]
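
To map those scaled weights back to the original feature space, you should be able to fold kernlab's centering and scaling constants back in. A sketch, assuming a binary C-svc model and that the constants live in the model's scaling slot under x.scale (with the scaled:center / scaled:scale names that base::scale produces):

w.scaled <- (y.sv * alphas) %*% xmatrix(my.model)[[1]]  # weights in scaled space

# kernlab scaled each feature as z_j = (x_j - center_j) / scale_j
sc      <- my.model@scaling$x.scale
centers <- sc$"scaled:center"
scales  <- sc$"scaled:scale"

# Substitute z into (w.scaled . z - b) and collect the terms in x:
w    <- as.vector(w.scaled) / scales       # per-feature weights on raw values
bias <- -b(my.model) - sum(w * centers)    # additive bias term
c(.bias = bias, w)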

If I understood your question, this should get you what you're after.

Steve Lianoglou
Hi. Sorry, I can't look into this right now ... I just wanted to comment because I fixed a typo or two in the code (by the way, it was defaulting to regression when you don't explicitly set the type). Honestly, I just think there's some scaling issue going on. Also, if you run it again, the weight of f3 should be 0 using both methods.
Steve Lianoglou
Yeah, I'm sure it's a scaling issue too, but I can't seem to find the mojo to get the scaling right.
Ken Williams