views: 65

answers: 3

I want to break a domain name into constituent words and numbers e.g.

iamadomain11.com = ['i', 'am', 'a', 'domain', '11']

How do I do this? I am aware that there may be multiple possible segmentations; however, for now I am fine with getting just one set of possibilities.

+1  A: 

This is a fun problem! First you would need a dictionary. For performance reasons, store it in a hashset (in Python, a set or the keys of a dict will do). You could then iterate over each possible substring ("i", "ia", "iam", ..., "n11", "11", "1") and check for matches in the dictionary. Then it's a matter of iterating over those matches until you have a contiguous set with no overlaps.

This would be quick and dirty. There are probably faster ways to do it.
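A minimal Python sketch of this kind of dictionary-plus-backtracking approach, assuming a word list such as `/usr/share/dict/words` and treating a run of digits as a valid token (both assumptions on my part), might look like this:

```python
def load_words(path='/usr/share/dict/words'):
    # One word per line; add 'a' and 'i' in case the list lacks single-letter words.
    with open(path) as f:
        words = {line.strip().lower() for line in f if line.strip()}
    return words | {'a', 'i'}

def segment(s, words):
    """Return one token list covering s, or None if no split exists."""
    if not s:
        return []
    for end in range(len(s), 0, -1):        # try longer prefixes first
        prefix = s[:end]
        if prefix in words or prefix.isdigit():
            rest = segment(s[end:], words)
            if rest is not None:            # this branch covered the remainder
                return [prefix] + rest
    return None                             # dead end: the caller rolls back

words = load_words()
print(segment('iamadomain11', words))
# e.g. ['i', 'am', 'a', 'domain', '11'] -- the exact split depends on the word list
```

For strings as short as a domain name this is fine; for longer inputs you would want to memoise on the remaining suffix so dead ends aren't re-explored.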

Evil Pigeon
This would be very unoptimized and the time taken would be huge!
demos
@demos: *maybe* not when you're trying to break up a string as small as your typical domain name.
Ken Bloom
His method relies on creating all possible n-grams of consecutive letters for a particular word. Are you aware of a large dictionary of English words somewhere?
demos
It would certainly be computationally expensive. You would want to use a dictionary, as doing this heuristically would be a nightmare. Storing words in a hashset would make lookups O(1). You also wouldn't have to compute and store every n-gram. Your implementation could work by matching words in order, then rolling back when a solution isn't found. The process might look something like this: "i" -> "a" -> "mad" -> no word match found in "omain11", rolling back. "i" -> "am" -> "a" -> "do" -> "main" -> "11".
Evil Pigeon
@demos: most UNIX systems have `/usr/share/dict/words` (a plain text file with one word per line) that you can use as a dictionary.
Ken Bloom
+1  A: 

This sounds similar to the problem of tokenising Chinese, where there are no spaces between words. This paragraph is taken from 'Introduction to Information Retrieval' by Manning, Raghavan & Schütze, which is available online:

This phenomenon reaches its limit case with major East Asian Languages (e.g., Chinese, Japanese, Korean, and Thai), where text is written without any spaces between words. [...] One approach here is to perform word segmentation as prior linguistic processing. Methods of word segmentation vary from having a large vocabulary and taking the longest vocabulary match with some heuristics for unknown words to the use of machine learning sequence models, such as hidden Markov models or conditional random fields, trained over hand-segmented words

I would suggest greedy dictionary matching as a first step, then adding heuristics to handle the most common failure cases.
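The greedy longest-match step could be sketched roughly like this (I'm assuming `vocab` is a set of lowercase words; the digit handling is an extra assumption so the '11' survives):

```python
import re

def greedy_segment(s, vocab, max_word_len=20):
    """Greedy longest-match segmentation: no backtracking."""
    tokens, i = [], 0
    while i < len(s):
        m = re.match(r'\d+', s[i:])
        if m:                                    # a run of digits counts as one token
            tokens.append(m.group())
            i += len(m.group())
            continue
        for end in range(min(len(s), i + max_word_len), i, -1):
            if s[i:end] in vocab:                # longest vocabulary match wins
                tokens.append(s[i:end])
                i = end
                break
        else:                                    # nothing matched: emit one character
            tokens.append(s[i])
            i += 1
    return tokens
```

Greedy matching will happily pick a long word that swallows the start of the next one, which is exactly the kind of failure case the added heuristics have to catch.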

StompChicken
+1  A: 

This is actually solved in the O'Reilly Media book Beautiful Data. In Chapter 14, "Natural Language Corpus Data", Peter Norvig builds a splitter in Python that does exactly what you want, using a giant, freely available token-frequency data set.
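A compressed sketch of that kind of unigram-probability splitter (not the book's code; `counts` and `TOTAL` are placeholders you would fill from the frequency data) is:

```python
from functools import lru_cache
from math import log10

counts = {}   # word -> frequency, loaded from a unigram count file
TOTAL = 1     # total number of tokens in the corpus

def word_logprob(w):
    if w in counts:
        return log10(counts[w] / TOTAL)
    return log10(10.0 / (TOTAL * 10 ** len(w)))   # crude penalty for unseen words

@lru_cache(maxsize=None)
def segment(text, max_len=20):
    """Return (log probability, word list) for the best split of text."""
    if not text:
        return 0.0, []
    best = None
    for i in range(1, min(len(text), max_len) + 1):
        first, rest = text[:i], text[i:]
        rest_lp, rest_words = segment(rest, max_len)
        cand = (word_logprob(first) + rest_lp, [first] + rest_words)
        if best is None or cand[0] > best[0]:
            best = cand
    return best
```

With real counts loaded, `segment('iamadomain11')[1]` should come out close to `['i', 'am', 'a', 'domain', '11']`, since common words are far more probable than the alternative splits.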

Thien