Right now I'm using a variety of regexes to "parse" the data in the MediaWiki markup into lists/dictionaries, so that elements within the article can be used.
This is hardly the best method, as the number of cases that have to be handled is large.
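For context, here is roughly the kind of pattern I'm writing now, heavily simplified (the regex and function name are just illustrative, and this already misses cases like nested templates, HTML comments, and `<nowiki>` blocks):

```python
import re

# Rough sketch of the current approach: pull "== Heading ==" lines out of the
# raw wikitext and map each heading title to its level.
HEADING_RE = re.compile(r'^(={2,6})\s*(.*?)\s*\1\s*$', re.MULTILINE)

def extract_headings(wikitext):
    """Return {heading_title: heading_level} for each section heading found."""
    return {m.group(2): len(m.group(1)) for m in HEADING_RE.finditer(wikitext)}
```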
How would one parse an article's MediaWiki markup into a variety of Python objects so that the data within can be used?
Example being:
- Extract all headlines into a dictionary, hashed by their section.
- Grab all interwiki links and stick them into a list (I know this can be done from the API, but I'd rather make only one API call to reduce bandwidth use).
- Extract all image names and hash them with their sections.
A variety of regexes can achieve the above, but the number I have to write is getting rather large.
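To make that concrete, here are simplified versions of two more of the patterns involved; they are rough approximations rather than a complete grammar, and every new element type needs another one:

```python
import re

# [[target]] or [[target|label]] style links (very rough; real link syntax
# allows nesting and other cases this does not handle).
WIKILINK_RE = re.compile(r'\[\[([^\[\]|]+)(?:\|[^\[\]]*)?\]\]')

# [[File:Name.jpg|...]] or the older [[Image:...]] inclusions.
IMAGE_RE = re.compile(r'\[\[(?:File|Image):([^\[\]|]+)', re.IGNORECASE)

def extract_links_and_images(wikitext):
    """Return ([link targets], [image names]) found in the raw wikitext."""
    links = [m.group(1).strip() for m in WIKILINK_RE.finditer(wikitext)]
    images = [m.group(1).strip() for m in IMAGE_RE.finditer(wikitext)]
    return links, images
```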
Here's the unofficial MediaWiki specification (I don't find the official specification as useful).