Wikipedia stores all its information on its servers, and the pages are rendered by PHP. Is there a way to download and store Wikipedia's content without actually crawling the website? That would save me time and storage space, as well as the later processing of the crawled data.

P.S. I know the question is poorly worded, but I hope you understand what I mean.

+3  A: 

Yes, you can download various SQL/XML dumps instead of crawling. Full notes are at Wikipedia:Database download (https://en.wikipedia.org/wiki/Wikipedia:Database_download), and the dumps themselves are served from https://dumps.wikimedia.org/.
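If the goal is to process the content afterwards, a dump can be read without ever unpacking it to disk. Below is a minimal Python sketch that streams a pages-articles dump straight from the compressed .bz2 file; the file name and XML namespace are examples and depend on which dump and export-schema version you download:

    import bz2
    import xml.etree.ElementTree as ET

    DUMP = "enwiki-latest-pages-articles.xml.bz2"       # example file name -- use whatever dump you fetched
    NS = "{http://www.mediawiki.org/xml/export-0.10/}"  # export namespace; check your dump's <mediawiki> tag

    with bz2.open(DUMP, "rb") as f:
        # iterparse streams the file, so the multi-GB dump never has to fit in memory
        for event, elem in ET.iterparse(f, events=("end",)):
            if elem.tag == NS + "page":
                title = elem.findtext(NS + "title")
                text = elem.findtext(NS + "revision/" + NS + "text") or ""
                print(title, len(text))
                elem.clear()  # discard the processed page to keep memory usage flat

For anything heavier than a one-off scan, dedicated dump parsers exist (e.g. the mwxml Python library), but the standard-library approach above is enough to confirm the dumps contain what you need.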

e100