views: 110 | answers: 4

I want to teach myself enough machine learning so that, to begin with, I can understand and put to use available open source ML frameworks that will allow me to do things like:

  1. Go through the HTML source of pages from a certain site and "understand" which sections form the content, which are advertisements, and which are metadata (neither content nor ads, e.g. a TOC, author bio etc.)

  2. Go through the HTML source of pages from disparate sites and "classify" whether the site belongs to a predefined category or not (a list of categories will be supplied beforehand).

  3. ... similar classification tasks on text and pages.

As you can see, my immediate requirements are to do with classification on disparate data sources and large amounts of data.

As far as my limited understanding goes, the neural net approach will require much more training and maintenance than putting SVMs to use?

I understand that SVMs are well suited to (binary) classification tasks like mine, and open source frameworks like libSVM are fairly mature?

In that case, what subjects and topics does a computer science graduate need to learn right now so that the above requirements can be met by putting these frameworks to use?

I would like to stay away from Java, if possible, and I have no language preferences otherwise. I am willing to learn and put in as much effort as I possibly can.

My intent is not to write code from scratch but, to begin with, to put the various available frameworks to use (though I do not know enough to decide which ones), and I should be able to fix things should they go wrong.

Recommendations on learning specific portions of statistics and probability theory would not be unexpected, so do say so if required!

I will modify this question if needed, depending on all your suggestions and feedback.

+2  A: 

The most widely used general machine learning library (freely) available is probably WEKA. They have a book that introduces some ML concepts and covers how to use their software. Unfortunately for you, it is written entirely in Java.

I am not really a Python person, but it would surprise me if there weren't a lot of tools available for it as well.

For text-based classification right now, Naive Bayes, Decision Trees (J48 in particular, I think), and SVM approaches are giving the best results. However, each is better suited to slightly different applications. Off the top of my head I'm not sure which would suit you best. With a tool like WEKA you could try all three approaches on some example data without writing a line of code and see for yourself.
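To give a feel for what one of these methods computes under the hood, here is a toy bag-of-words Naive Bayes classifier in pure Python. The training snippets and the "ad"/"content" labels are invented for illustration; with a tool like WEKA you would not write this yourself.

```python
import math
from collections import Counter, defaultdict

def train_naive_bayes(docs):
    """docs: list of (text, label). Returns priors, per-label word counts, vocab."""
    label_counts = Counter(label for _, label in docs)
    word_counts = defaultdict(Counter)  # label -> word -> count
    vocab = set()
    for text, label in docs:
        for word in text.lower().split():
            word_counts[label][word] += 1
            vocab.add(word)
    return label_counts, word_counts, vocab

def classify(model, text):
    label_counts, word_counts, vocab = model
    total_docs = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        # log prior + log likelihoods with add-one (Laplace) smoothing
        score = math.log(label_counts[label] / total_docs)
        total_words = sum(word_counts[label].values())
        for word in text.lower().split():
            score += math.log((word_counts[label][word] + 1)
                              / (total_words + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Invented toy data: advertisement snippets vs. content snippets
training = [
    ("buy now cheap discount offer", "ad"),
    ("limited offer click here", "ad"),
    ("the author discusses machine learning", "content"),
    ("this article explains neural networks", "content"),
]
model = train_naive_bayes(training)
print(classify(model, "special discount offer"))          # -> ad
print(classify(model, "article about machine learning"))  # -> content
```

The smoothing term keeps unseen words from zeroing out a class; real toolkits do the same thing with more care.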

I tend to shy away from Neural Networks simply because they can get very very complicated quickly. Then again, I haven't tried a large project with them mostly because they have that reputation in academia.

Probability and statistics knowledge is only required if you are using probabilistic algorithms (like Naive Bayes). SVMs are generally not used in a probabilistic manner.

From the sound of it, you may want to invest in an actual pattern classification textbook or take a class on it in order to find exactly what you are looking for. For custom/non-standard data sets it can be tricky to get good results without having a survey of existing techniques.

adam
WEKA sounds like a tool I can use to *get a feel* for what I might need to use?
PoorLuzer
+1  A: 

It seems to me that you are now entering the machine learning field, so I'd really like to suggest having a look at this book: not only does it provide a deep and broad overview of the most common machine learning approaches and algorithms (and their variations), but it also provides a very good set of exercises and links to scientific papers. All of this is wrapped up in insightful language, with a minimal yet useful compendium of statistics and probability.

rano
+3  A: 

"Understanding" in machine learning is the equivalent of having a model. The model can be, for example, a collection of support vectors, the layout and weights of a neural network, a decision tree, and so on. Which of these methods works best really depends on the subject you're learning from and on the quality of your training data.

In your case, learning from a collection of HTML sites, you will want to preprocess the data first; this step is also called "feature extraction". That is, you extract information from the page you're looking at. This is a difficult step, because it requires domain knowledge and you'll have to extract useful information, or otherwise your classifiers will not be able to make good distinctions. Feature extraction will give you a dataset (a matrix with one row of features per page) from which you'll be able to create your model.
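As a rough sketch of what feature extraction from HTML can look like, here is a minimal example using Python's standard-library HTML parser. The features (link count, paragraph count, amount of text) are invented for illustration; real projects need domain-specific ones.

```python
from html.parser import HTMLParser

class FeatureExtractor(HTMLParser):
    """Turn one HTML page into a row of simple numeric features."""
    def __init__(self):
        super().__init__()
        self.links = 0
        self.paragraphs = 0
        self.text_chars = 0

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links += 1
        elif tag == "p":
            self.paragraphs += 1

    def handle_data(self, data):
        self.text_chars += len(data.strip())

def extract_features(html):
    """Return one feature row: [link count, paragraph count, text length]."""
    parser = FeatureExtractor()
    parser.feed(html)
    return [parser.links, parser.paragraphs, parser.text_chars]

page = '<p>Hello <a href="/x">world</a></p><p>More text</p>'
print(extract_features(page))  # -> [1, 2, 19]
```

Running `extract_features` over every page of a site produces exactly the kind of feature matrix the classifiers expect as input.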

Generally in machine learning it is advised to also keep a "test set" that you do not train your models with, but that you use at the end to decide on the best method. It is of extreme importance that you keep the test set hidden until the very end of your modeling step! The test data basically gives you a hint of the "generalization error" that your model is making. Any model with enough complexity and learning time tends to learn exactly the information that you train it with. Machine learners say that the model "overfits" the training data. Such overfitted models seem to perform well, but this is just memorization.

While software support for preprocessing data is very sparse and highly domain dependent, as adam mentioned, Weka is a good free tool for applying different methods once you have your dataset. I would recommend reading several books. Vladimir Vapnik, the inventor of SVMs, wrote "The Nature of Statistical Learning Theory". You should get familiar with the process of modeling, so a book on machine learning is definitely very useful. I also hope that some of this terminology helps you find your way around.

DonAndre
A VERY well written answer! Thanks!
PoorLuzer
+2  A: 

This seems like a pretty complicated task to me; step 2, classification, is "easy", but step 1 looks like a structure learning task. Specialized programming/modeling languages have been devised for this kind of problem in the last few years.

For complicated machine learning tasks (several classifiers in one program, constraints, etc.), my tool of choice is currently Learning Based Java. Java, again; sorry. The Natural Language Toolkit for Python also includes a lot of machine learning algorithms and libraries for handling the kind of textual data you're interested in. It's described in a book and includes lots of example data, but my experience with it is that it's kind of slow.

The main lesson that I've learned in my short experience with machine learners: don't overfocus on one method, such as SVMs. Pick a good toolbox that includes several different algorithms. Wisdom in the ML community has it that the amount and quality of your data are far more important than the exact learning algorithms, as long as they (can be bent to) fit your problem. Learn the basics (some probability theory, linear algebra and learning theory) and experiment, trying various algorithms on the same task.
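The "try various algorithms on the same task" advice boils down to a loop over classifiers that share a common fit/predict interface, which is how toolkits like Weka or NLTK present their algorithms. The two toy classifiers and the data below are invented for illustration:

```python
class MajorityBaseline:
    """Always predicts the most frequent training label; a sanity baseline."""
    def fit(self, X, y):
        self.label = max(set(y), key=y.count)
    def predict(self, X):
        return [self.label for _ in X]

class OneNearestNeighbor:
    """Predicts the label of the closest training point (squared distance)."""
    def fit(self, X, y):
        self.X, self.y = X, y
    def predict(self, X):
        def closest(x):
            dists = [sum((a - b) ** 2 for a, b in zip(x, p)) for p in self.X]
            return self.y[dists.index(min(dists))]
        return [closest(x) for x in X]

# Invented toy task: points near the origin are class 0, far points class 1.
X_train = [[0, 0], [1, 0], [0, 1], [0, 2], [5, 5], [6, 5], [5, 6]]
y_train = [0, 0, 0, 0, 1, 1, 1]
X_test  = [[1, 1], [6, 6]]
y_test  = [0, 1]

# Evaluate every algorithm on the same data and compare.
for clf in (MajorityBaseline(), OneNearestNeighbor()):
    clf.fit(X_train, y_train)
    preds = clf.predict(X_test)
    acc = sum(p == t for p, t in zip(preds, y_test)) / len(y_test)
    print(type(clf).__name__, acc)
```

Swapping in a third algorithm is one more entry in the tuple, which is exactly why a toolbox with a uniform interface beats committing to a single method up front.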

larsmans