For example using this Wikipedia dump:
Is there an existing library for Python that I can use to create an array with the mapping of subjects and values?
For example:
{height_ft,6},{nationality, American}
There's some information on Python and XML libraries here.
If you're asking whether there's an existing library designed to parse Wiki(pedia) XML specifically and match your requirements, that's doubtful. However, you can use one of the existing XML libraries to traverse the DOM and pull out the data you need.
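To illustrate the DOM-traversal idea with nothing but the standard library: the abbreviated response below is a hand-written stand-in for what the MediaWiki API returns in XML format, but the general shape (`page` elements carrying `revisions/rev` with the wikitext) is what you'd walk through.

```python
# Sketch: pulling page content out of a MediaWiki-style XML response
# using only the standard library. The sample XML is abbreviated and
# hand-written for the example.
import xml.etree.ElementTree as ET

sample = """<?xml version="1.0"?>
<api>
  <query>
    <pages>
      <page pageid="9316" title="Tony Benn">
        <revisions>
          <rev>{{Infobox officeholder
| name        = Tony Benn
| nationality = British
}}</rev>
        </revisions>
      </page>
    </pages>
  </query>
</api>"""

root = ET.fromstring(sample)
for page in root.iter("page"):
    title = page.get("title")                      # page title attribute
    wikitext = page.findtext("./revisions/rev")    # raw wikitext payload
    print(title)
```

From there you still have raw wikitext in hand, which is where a markup parser (see the other answers) comes in.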
Another option is to write an XSLT stylesheet that does similar and call it using lxml. This also lets you make calls to Python functions from inside the XSLT so you get the best of both worlds.
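A minimal sketch of that lxml/XSLT combination, including a Python function exposed to the stylesheet — the namespace URI, function name, and input document here are all made up for the example:

```python
# Sketch: running an XSLT stylesheet with lxml and calling back into
# Python from inside it via a FunctionNamespace extension.
from lxml import etree

def shout(context, s):
    # Hypothetical helper: uppercase a string passed from the stylesheet.
    return s.upper()

ns = etree.FunctionNamespace("urn:example:py")
ns["shout"] = shout

xslt = etree.XML("""
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:py="urn:example:py">
  <xsl:template match="/page">
    <result><xsl:value-of select="py:shout(string(title))"/></result>
  </xsl:template>
</xsl:stylesheet>""")

doc = etree.XML("<page><title>Tony Benn</title></page>")
result = etree.XSLT(xslt)(doc)
print(str(result))
```

The extension-function mechanism is what gives you "the best of both worlds": XPath/XSLT for navigation, Python for any logic that's awkward in a stylesheet.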
I would say look at using Beautiful Soup and just get the Wikipedia page in HTML instead of using the API.
I'll try and post an example.
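Here is a rough sketch of the approach: the HTML below is a hand-written stand-in for Wikipedia's rendered infobox markup (the real page's class names and structure can change over time), but the row-by-row scraping pattern is the same.

```python
# Sketch: scraping infobox rows from rendered HTML with Beautiful Soup.
# The HTML string is a simplified stand-in for a real Wikipedia page.
from bs4 import BeautifulSoup

html = """
<table class="infobox">
  <tr><th>Nationality</th><td>American</td></tr>
  <tr><th>Height</th><td>6 ft</td></tr>
</table>
"""

soup = BeautifulSoup(html, "html.parser")
mapping = {}
for row in soup.select("table.infobox tr"):
    th, td = row.find("th"), row.find("td")
    if th and td:
        mapping[th.get_text(strip=True)] = td.get_text(strip=True)

print(mapping)  # {'Nationality': 'American', 'Height': '6 ft'}
```

For a real page you'd fetch the HTML first (e.g. with `urllib`) and feed that to `BeautifulSoup` instead of the literal string.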
It looks like you really want to be able to parse MediaWiki markup. There is a Python library designed for this purpose called mwlib. You can use Python's built-in XML packages to extract the page content from the API's response, then pass that content into mwlib's parser to produce an object representation that you can browse and analyse in code to extract the information you want. mwlib is BSD licensed.
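mwlib builds a full parse tree of the markup; purely to illustrate the underlying extraction idea (this is NOT mwlib's API), here is a stdlib sketch that pulls key/value parameters out of a simple infobox template. Real wikitext with nested templates or multi-line values needs a proper parser like mwlib rather than a regex.

```python
# Rough stdlib illustration only: extract simple "| key = value" pairs
# from an infobox template. Not mwlib's API, and not robust against
# nested templates or multi-line values.
import re

wikitext = """{{Infobox person
| name        = John Doe
| height_ft   = 6
| nationality = American
}}"""

pairs = dict(re.findall(r"\|\s*(\w+)\s*=\s*(.+)", wikitext))
print(pairs)  # {'name': 'John Doe', 'height_ft': '6', 'nationality': 'American'}
```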
You're probably looking for Pywikipediabot for working with the Wikipedia API.