views: 177

answers: 4

Hi,

I am looking to download full Wikipedia text for my college project. Do I have to write my own spider to download this or is there a public dataset of Wikipedia available online?

To give you an overview of my project: I want to find the interesting words in a few articles I am interested in. To find these interesting words, I am planning to apply tf-idf to score each word and pick the ones with high scores. But to calculate the idf part, I need to know how many documents in the whole of Wikipedia each word occurs in.
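To make this concrete, here is a minimal sketch of the scoring I have in mind, in plain Python with a toy corpus standing in for the real articles (all names here are just illustrative):

    import math

    def tf_idf(term, doc, corpus):
        # tf: how often the term appears in this document, normalised by length
        tf = doc.count(term) / len(doc)
        # df: in how many documents of the corpus the term appears at all
        df = sum(1 for d in corpus if term in d)
        # idf: rarer across the corpus -> higher weight
        # (+1 avoids division by zero for unseen terms)
        idf = math.log(len(corpus) / (1 + df))
        return tf * idf

    # Toy corpus standing in for Wikipedia articles (lists of tokens).
    docs = [
        "the cat sat on the mat".split(),
        "the dog chased the cat".split(),
        "a free online encyclopedia anyone can edit".split(),
    ]
    for word in set(docs[0]):
        print(word, round(tf_idf(word, docs[0], docs), 3))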

Your help would be greatly appreciated.

Thank you
Bala

+7  A: 

From Wikipedia: http://en.wikipedia.org/wiki/Wikipedia_database

Wikipedia offers free copies of all available content to interested users. These databases can be used for mirroring, personal use, informal backups, offline use or database queries (such as for Wikipedia:Maintenance). All text content is multi-licensed under the Creative Commons Attribution-ShareAlike 3.0 License (CC-BY-SA) and the GNU Free Documentation License (GFDL). Images and other files are available under different terms, as detailed on their description pages. For our advice about complying with these licenses, see Wikipedia:Copyrights.

Seems that you are in luck too. From the dump section:

As of 12 March 2010, the latest complete dump of the English-language Wikipedia can be found at http://download.wikimedia.org/enwiki/20100130/ This is the first complete dump of the English-language Wikipedia to have been created since 2008. Please note that more recent dumps (such as the 20100312 dump) are incomplete.

So the data is only 9 days old :)
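Once you have a dump file, you don't need to decompress it first; you can stream it. Something along these lines should work in Python 3 (a rough sketch, assuming the pages-articles file is in the current directory; I haven't run it against a full dump):

    import bz2
    import xml.etree.ElementTree as ET

    DUMP = "enwiki-latest-pages-articles.xml.bz2"  # assumed: dump downloaded locally

    # Stream the compressed XML so the multi-GB dump never sits in memory at once.
    with bz2.open(DUMP, "rb") as f:
        for _, elem in ET.iterparse(f):
            # MediaWiki export tags carry a namespace; match on the local name.
            if elem.tag.endswith("}title"):
                print(elem.text)
            elem.clear()  # discard parsed elements as we go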

Sam Holder
I upvoted your answer over the others simply because you did more than just post a link.
Unkwntech
I cut and pasted too :)
Sam Holder
@Sam Holder Just want to confirm: is this the correct link to download all the pages? http://dumps.wikimedia.org/enwiki/latest/enwiki-latest-pages-articles.xml.bz2
Algorist
Yeah, that seems to be all current pages, and is probably what you want, though without knowing exactly what you need, it's hard to say for sure.
Sam Holder
+1  A: 

See http://en.wikipedia.org/wiki/Wikipedia_database

maligree
+1  A: 

Considering the size of the dump, you would probably be better served by using word-frequency statistics for English in general, or by using the MediaWiki API to poll pages at random (or the most-consulted pages). There are frameworks for building bots on top of this API (in Ruby, C#, ...) that can help you.
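For example, a single request like the following pulls a handful of random article titles from the API (Python standard library only; the User-Agent string is a placeholder you should replace with your own):

    import json
    import urllib.request

    # One GET against the MediaWiki API: five random main-namespace pages.
    API = ("https://en.wikipedia.org/w/api.php"
           "?action=query&list=random&rnnamespace=0&rnlimit=5&format=json")

    # Wikimedia asks clients to identify themselves with a User-Agent.
    req = urllib.request.Request(API, headers={"User-Agent": "tf-idf-demo/0.1"})
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)

    for page in data["query"]["random"]:
        print(page["id"], page["title"])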

Luk