Hi there, I am looking for best practices or ideas for cleaning tags, or at least grabbing the data from within custom tags in a text.

I am sure I can code some sort of "parser" that will go through every line manually, but isn't there some smarter way today?

Example data:

{Phone:555-123456789}

Here "phone" is the key and the number is the data. It looks a lot like JSON, but it's easier for a human to write.

or

{link:   article123456  ;    title:    Read about article 123456 here   } 

It could be normal (X)HTML too:

<a         href="article123456.html"      >  Read about article 123456 here  </a>

Humans aren't always nice enough to "trim" their input, and neither are old websites made with lazy WYSIWYG editors, so I first need to figure out which pairs belong together, find the "data within", and then trim the results.

The problem with the "title" part above is that there are no quotes surrounding the title text, so the parser could either add them automatically or report the error to the human.

Any thoughts on the best way to grab this data? There seem to be several ways that might work, but what's your best approach to this problem?

+1  A: 

I would first write a "tokenizer" for the syntax of the data I was parsing. A tokenizer is a (relatively) simple process that breaks a string down into a series of fragments, or tokens. For example, in your first two cases your basic tokens would consist of "{", "}", ":", and ";", and everything else would be interpreted as a data token. This can be done with a loop, a recursive function, or in a number of other ways. Tokenizing your second example would produce an array (or some other sort of list) with the following values:

"{", "link", ":", "   article123456  ", ";", "    title", ":", "    Read about article 123456 here   ", "}"

The next step would be to "sanitize" your data, though in these cases all that really means is removing unwanted whitespace. Iterate through the token array that was produced, and alter each token so that there is no leading or trailing whitespace. This step could be combined with tokenization, but I think it's much cleaner and clearer to do it separately. Your tokens would then look like this:

"{", "link", ":", "article123456", ";", "title", ":", "Read about article 123456 here", "}"

And finally, the actual "interpretation." You'll need to convert your token array into whatever sort of data structure you intend to be the final product of the parsing process. For this you'll definitely want a recursive function. If the function is called on a data token, followed by a colon token, followed by a data token, it will interpret them as a key-value pair, and produce a data structure accordingly. If it is called on a series of tokens containing semicolon tokens, it will split the tokens up at each semicolon and call itself on each of the resulting groups. And if it is called on tokens contained within curly-brace tokens, it will call itself on the contained tokens before doing anything else. Note that this is not necessarily the order in which you'll want to check for these various cases; in particular, if you intend to nest curly braces (or any other sort of grouping tokens, such as square brackets, angle brackets, or parentheses), you'll need to make sure to interpret those tokens in the correct nested order.
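A recursive-descent sketch of that interpretation step, assuming the trimmed token list from above and allowing nested curly-brace blocks (the Interpreter name is illustrative, and error handling is kept minimal):

    using System;
    using System.Collections.Generic;

    static class Interpreter
    {
        // Parses "{ key : value ; key : value }" into a dictionary.
        // A value may itself be a nested "{...}" block, handled by recursion.
        public static Dictionary<string, object> Parse(List<string> tokens)
        {
            int pos = 0;
            var result = ParseBlock(tokens, ref pos);
            if (pos != tokens.Count)
                throw new FormatException("Unexpected tokens after closing '}'.");
            return result;
        }

        static Dictionary<string, object> ParseBlock(List<string> tokens, ref int pos)
        {
            Expect(tokens, ref pos, "{");
            var map = new Dictionary<string, object>();

            while (pos < tokens.Count && tokens[pos] != "}")
            {
                string key = tokens[pos++];          // data token: the key
                Expect(tokens, ref pos, ":");

                // The value is either a nested block or a plain data token.
                object value = tokens[pos] == "{"
                    ? ParseBlock(tokens, ref pos)
                    : (object)tokens[pos++];
                map[key] = value;

                if (pos < tokens.Count && tokens[pos] == ";")
                    pos++;                           // more pairs follow
            }

            Expect(tokens, ref pos, "}");
            return map;
        }

        static void Expect(List<string> tokens, ref int pos, string symbol)
        {
            if (pos >= tokens.Count || tokens[pos] != symbol)
                throw new FormatException("Expected '" + symbol + "' at token " + pos + ".");
            pos++;
        }
    }

Running this on the trimmed tokens above would produce a dictionary with "link" mapped to "article123456" and "title" mapped to "Read about article 123456 here".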

The result of these processes will be a fully parsed data structure of whatever type you'd like. Keep in mind that this process assumes that your data is all implicitly stored as the string type; if you'd like "3" and 3 to be interpreted differently, then things get a bit more complicated. This method I've outlined is not at all the only way to do it, but it's how I'd approach the problem.

tlayton
Yes exactly, much like a language parser. I wrote some "lite" versions many years ago for SQL parsing and Pascal during my education - and yes, it works indeed. But I was wondering if there are any "tricks" for this in today's modern .NET API, or some patterns that have been developed since. Not that I don't agree with you, but I hope to see other ways too - or perhaps some different "tokenizers" :o)
BerggreenDK