Scenario


I am attempting to implement supervised learning over a data set within a Java GUI application. The user will be given a list of items, or 'reports', to inspect, and will label them from a set of available labels. Once this labelling is complete, the labelled instances will be given to a learning algorithm, which will attempt to order the remaining items by how likely it is the user will want to view them.

To get the most from the user's time, I want to pre-select the reports that will provide the most information about the entire collection, and have the user label those first. As I understand it, this means calculating, for each report, the sum of its mutual information values with every other report, and ordering the reports by that value. The labelled reports from the supervised learning step would then be used to form a Bayesian network to find the probability of a binary value for each remaining report.
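In rough code terms, the ordering step I have in mind looks something like the sketch below. All the names here are made up by me for illustration; the PairScorer is a stand-in for the mutual information calculation this question is about.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public final class ReportRanker {

    /** Placeholder for the pairwise mutual information score this question asks about. */
    public interface PairScorer<T> {
        double score(T a, T b);
    }

    /** Returns the items ordered by the sum of their pairwise scores, highest first. */
    public static <T> List<T> rankByTotalScore(List<T> items, PairScorer<T> scorer) {
        Map<T, Double> totals = new HashMap<>();
        for (T item : items) {
            double total = 0.0;
            for (T other : items) {
                if (other != item) {
                    total += scorer.score(item, other);
                }
            }
            totals.put(item, total);
        }
        List<T> ranked = new ArrayList<>(items);
        ranked.sort(Comparator.comparingDouble((T t) -> totals.get(t)).reversed());
        return ranked;
    }
}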

Example


Here, an artificial example may help to explain, and may clear up confusion where I've undoubtedly used the wrong terminology :-) Consider an example where the application displays news stories to the user, and chooses which stories to display first based on the preferences the user has shown. Features of a news story that may correlate with the user's interest are its country of origin, its category and its date. So if a user labels a single news story from Scotland as interesting, it tells the machine learner that there is an increased chance other news stories from Scotland will interest the user. The same goes for a category such as Sport, or a date such as December 12th 2004.

These preferences could be learned by presenting all the news stories in some arbitrary order (e.g. by category, by date, or randomly) and updating the model as the user goes along. What I would like to do is get a kind of "head start" on that ordering by having the user look at a small number of specific news stories and say whether they're interested in them (the supervised learning part). To choose which stories to show the user, I have to consider the entire collection of stories. This is where mutual information comes in. For each story, I want to know how much it can tell me about all the other stories once it has been classified by the user. For example, if there is a large number of stories originating from Scotland, I want the user to classify (at least) one of them; similarly for other correlating features such as category or date. The goal is to find the reports which, when classified, provide the most information about the other reports.

Problem


Because my math is a bit rusty and I'm new to machine learning, I'm having some trouble converting the definition of mutual information into a Java implementation. Wikipedia gives the equation for the mutual information of two discrete random variables as:

I(X; Y) = \sum_{y \in Y} \sum_{x \in X} p(x, y) \log \frac{p(x, y)}{p(x)\, p(y)}
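To show where I've got to, here is my naive attempt at a direct transcription of that double sum into Java, with p(x), p(y) and p(x,y) all estimated by relative frequency over two aligned lists of discrete observations. I have no idea yet whether this is even the right starting point:

import java.util.HashMap;
import java.util.List;
import java.util.Map;

public final class MutualInformation {

    /**
     * I(X;Y) over two aligned lists of discrete observations, with the
     * probabilities estimated by relative frequency. Pairs that never
     * occur would contribute 0, so they are simply skipped.
     * Assumes the observed values contain no tab characters.
     */
    public static double compute(List<String> xs, List<String> ys) {
        if (xs.size() != ys.size()) {
            throw new IllegalArgumentException("observation lists must be aligned");
        }
        int n = xs.size();
        Map<String, Integer> countX = new HashMap<>();
        Map<String, Integer> countY = new HashMap<>();
        Map<String, Integer> countXY = new HashMap<>();
        for (int i = 0; i < n; i++) {
            countX.merge(xs.get(i), 1, Integer::sum);
            countY.merge(ys.get(i), 1, Integer::sum);
            countXY.merge(xs.get(i) + "\t" + ys.get(i), 1, Integer::sum);
        }
        double mi = 0.0;
        for (Map.Entry<String, Integer> e : countXY.entrySet()) {
            String[] pair = e.getKey().split("\t", 2);
            double pxy = (double) e.getValue() / n;
            double px = (double) countX.get(pair[0]) / n;
            double py = (double) countY.get(pair[1]) / n;
            mi += pxy * Math.log(pxy / (px * py)); // natural log, so result is in nats
        }
        return mi;
    }
}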

However, I'm unsure if this can actually be used when nothing has been classified, and the learning algorithm has not calculated anything yet.

As in my example, say I had a large number of new, unlabelled instances of this class:

public class NewsStory {
    private String countryOfOrigin;
    private String category;
    private Date date;
    // constructor, etc.
}

In my specific scenario, the correlation between fields/features is based on an exact match, so, for instance, dates that differ by one day and dates that differ by ten years are treated as equally unequal.

The factors for correlation (e.g. is date more strongly correlating than category?) are not necessarily equal, but they can be predefined and constant. Does this mean that the result of the function p(x,y) is that predefined value, or am I mixing up terms?
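To make the exact-match idea concrete, this is how I currently picture turning a story into discrete symbols, with the date formatted so that any difference at all yields a different symbol (the names here are mine, just for illustration):

import java.text.SimpleDateFormat;
import java.util.Date;

public final class StoryFeatures {

    // Exact-day formatting: any difference in the date, however small
    // or large, produces a different symbol, matching the exact-match idea.
    private static final String DAY_PATTERN = "yyyy-MM-dd";

    public static String[] symbolsOf(String countryOfOrigin, String category, Date date) {
        SimpleDateFormat day = new SimpleDateFormat(DAY_PATTERN);
        return new String[] {
            "country=" + countryOfOrigin,
            "category=" + category,
            "date=" + day.format(date)
        };
    }
}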

The Question (finally)


How can I go about implementing the mutual information calculation, given this (fake) example of news stories? Libraries, javadoc, code examples, etc. are all welcome. Also, if this approach is fundamentally flawed, explaining why that is the case would be just as valuable an answer.


PS. I am aware of libraries such as Weka and Apache Mahout, so just mentioning them is not really useful for me. I'm still searching through the documentation and examples of both of these libraries for material on mutual information specifically. What would really help me is a pointer to resources (code examples, javadoc) where these libraries address mutual information.

A: 

Try Weka.

Please see my edit: I'm aware of Weka, but I've been unable to find resources on what it can do for information gain. Could you be more specific? Thanks for your time!
Grundlefleck
Weka documentation is not the best; if you have a library-specific question, I suggest you try the mailing list.
+1  A: 

I know information gain only in connection with decision trees (DTs), where, in the construction of a DT, the split made at each node is the one which maximizes information gain. DTs are implemented in Weka, so you could probably use that directly, although I don't know if Weka lets you calculate the information gain for any particular split underneath a DT node.
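To illustrate what information gain means in the DT sense: a split's gain is the entropy of the labels before the split, minus the size-weighted entropy of each branch afterwards. A bare-bones version of the textbook formula (this is not Weka's API, just a self-contained sketch):

import java.util.HashMap;
import java.util.List;
import java.util.Map;

public final class InformationGain {

    /** Shannon entropy (in bits) of a list of discrete labels. */
    public static double entropy(List<String> labels) {
        Map<String, Integer> counts = new HashMap<>();
        for (String label : labels) {
            counts.merge(label, 1, Integer::sum);
        }
        double h = 0.0;
        for (int c : counts.values()) {
            double p = (double) c / labels.size();
            h -= p * (Math.log(p) / Math.log(2));
        }
        return h;
    }

    /** Entropy of the parent node minus the size-weighted entropy of its branches. */
    public static double gain(List<String> parent, List<List<String>> branches) {
        double weighted = 0.0;
        for (List<String> branch : branches) {
            weighted += ((double) branch.size() / parent.size()) * entropy(branch);
        }
        return entropy(parent) - weighted;
    }
}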

Apart from that, if I understand you correctly, I think what you're trying to do is generally referred to as active learning. There, you first need some initial labeled training data which is fed to your machine learning algorithm. Then you have your classifier label a set of unlabeled instances and return a confidence value for each of them. The instances with the lowest confidence values are usually the ones which are most informative, so you show these to a human annotator, have him/her label them manually, add them to your training set, retrain your classifier, and repeat until your classifier has a high enough accuracy or some other stopping criterion is met. So if this works for you, you could in principle use any ML algorithm implemented in Weka or any other ML framework, as long as the algorithm you choose is able to return confidence values (in the case of Bayesian approaches this would just be probabilities).
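As a schematic sketch of that loop (the Classifier and Oracle interfaces here are placeholders I've invented, not any concrete Weka API):

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public final class ActiveLearningLoop {

    public interface Classifier<T> {
        void train(List<T> labelled);
        double confidence(T instance); // e.g. the highest class probability
    }

    public interface Oracle<T> {
        void labelManually(T instance); // ask the human annotator
    }

    /** Uncertainty sampling: repeatedly hand the least confident instances to the human. */
    public static <T> void run(Classifier<T> clf, Oracle<T> oracle,
                               List<T> labelled, List<T> unlabelled,
                               int batchSize, int rounds) {
        for (int round = 0; round < rounds && !unlabelled.isEmpty(); round++) {
            clf.train(labelled);
            // Least confident instances first; both lists must be mutable.
            unlabelled.sort(Comparator.comparingDouble(clf::confidence));
            List<T> batch = new ArrayList<>(
                unlabelled.subList(0, Math.min(batchSize, unlabelled.size())));
            for (T instance : batch) {
                oracle.labelManually(instance);
                labelled.add(instance);
            }
            unlabelled.removeAll(batch);
        }
    }
}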


With your edited question I think I'm coming to understand what you're aiming at. If what you want is to calculate MI, then StompChicken's answer and pseudo code couldn't be much clearer in my view. I also think that MI is not what you want and that you're trying to re-invent the wheel.

Let's recapitulate: you would like to train a classifier which can be updated by the user. This is a classic case for active learning. But for that, you need an initial classifier (you could basically just give the user random data to label, but I take it this is not an option), and in order to train your initial classifier, you need at least some small amount of labeled training data for supervised learning. However, all you have is unlabeled data. What can you do with it?

Well, you could cluster it into groups of related instances, using one of the standard clustering algorithms provided by Weka or a specific clustering tool like Cluto. If you then take the x most central instances of each cluster (x depending on the number of clusters and the patience of the user) and have the user label them as interesting or not interesting, you can adopt that label for the other instances of each cluster as well (or at least for the central ones). Voila, now you have training data which you can use to train your initial classifier and kick off the active learning process, updating the classifier each time the user marks a new instance as interesting or not. I think what you're trying to achieve by calculating MI is essentially similar, but may just be the wrong carriage for your charge.
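To make "the x most central instances" concrete, one framework-agnostic way to pick them from a cluster is to rank members by their average similarity to the rest of the cluster. The Similarity interface below is a placeholder; with your exact-match features it could simply be the fraction of matching fields:

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public final class ClusterSeeds {

    /** Placeholder pairwise similarity, e.g. the fraction of exactly matching features. */
    public interface Similarity<T> {
        double of(T a, T b);
    }

    /** The x members of one cluster with the highest average similarity to the rest. */
    public static <T> List<T> mostCentral(List<T> cluster, Similarity<T> sim, int x) {
        List<T> ranked = new ArrayList<>(cluster);
        ranked.sort(Comparator.comparingDouble((T t) -> avgSim(t, cluster, sim)).reversed());
        return new ArrayList<>(ranked.subList(0, Math.min(x, ranked.size())));
    }

    private static <T> double avgSim(T item, List<T> cluster, Similarity<T> sim) {
        double total = 0.0;
        for (T other : cluster) {
            if (other != item) {
                total += sim.of(item, other);
            }
        }
        return cluster.size() > 1 ? total / (cluster.size() - 1) : 0.0;
    }
}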

Not knowing the details of your scenario, I should think that you may not even need any labeled data at all, unless you're interested in the labels themselves. Just cluster your data once, let the user pick an item that interests him/her from the central members of each cluster, and suggest other items from the selected clusters as perhaps being interesting as well. Also suggest some random instances from other clusters here and there, so that if the user selects one of these, you may assume that the corresponding cluster might generally be interesting, too. If there is a contradiction, and a user likes some members of a cluster but not others of the same one, you can try to re-cluster the data into finer-grained groups which discriminate the good from the bad ones. The re-clustering step could even be avoided by using hierarchical clustering from the start and travelling down the cluster hierarchy at every contradiction the user's input causes.

ferdystschenko
"Instances with the lowest confidence values are usually the ones which are most informative" - I don't think this is the case in my scenario. There are a few features for each instance and it is basically the instance which matches the most features from other unlabeled data that is the most informative. Is this such an unusual scenario that it's unlikely to be available in a library?
Grundlefleck
The term 'informative' is meant a little differently here: you can have an instance with all values set, but if your classifier already 'knows' which class it belongs to, adding this instance to your training set will not improve accuracy. On the other hand, if an instance is classified incorrectly, adding it to your training set may give the learner new information (given that there are enough non-zero features, of course; an instance which couldn't be classified due to data sparsity can in principle not provide new information).
ferdystschenko
I think I may have confused the meaning of training data - I was using it for the data given to a human user to classify, which I guess is wrong. The result is that I think you've given an answer which applies to the scenario *just after* the bit I'm talking about. I'm looking for which examples to give to the human the very first time, before a learning algorithm is involved. Cheers!
Grundlefleck
+2  A: 
StompChicken
Ah, I think my poor terminology may have let me down. Say I have a list of 'things'; I'll call them 'reports'. A report has several features, but what I'm ultimately interested in is the probability of a true/false value to assign to the report. Is the report then a random variable? I may also have used the term 'feature' incorrectly. If the analogy were that a report is a Java object, would a feature be an instance variable of that object? Hopefully if I get my terminology correct I can make the question better.
Grundlefleck
This was extremely helpful, but as is always the case, it leads me to another question :-p If I calculate MI for every discrete value of each feature, then retrieve the MI value for each feature of a given example and sum them, would that make sense as an ordering over all the examples? Thanks! PS. I have edited the question, which may clarify what I mean some more.
Grundlefleck