Wikipedia stores all of its content on its own servers, and the pages are rendered by PHP (MediaWiki). Is there a way to download and store Wikipedia's content without actually crawling through the website page by page? That way I would save time and storage space, and also the later processing of the crawled data.
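To make the question concrete, this is roughly the kind of page-by-page crawl I'm trying to avoid (a minimal Python sketch; the article titles and User-Agent string are just placeholders):

```python
import requests

# Hypothetical crawl I want to avoid: fetching the rendered HTML of
# every article, one HTTP request at a time.
titles = ["Python_(programming_language)", "PHP"]  # placeholder titles

for title in titles:
    url = f"https://en.wikipedia.org/wiki/{title}"
    resp = requests.get(url, headers={"User-Agent": "example-crawler/0.1"})
    resp.raise_for_status()
    html = resp.text  # rendered HTML that I would still have to parse and clean
    print(title, len(html))
```

Multiply that by millions of articles and the bandwidth, rate limits, and HTML cleanup add up, which is why I'm hoping there is some kind of bulk download instead.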
P.S. I know the question is poorly worded, but I hope you understand what I mean.