The structure you are trying to create is known as an inverted index. Here you can find some general information about it and snippets from Heaps and Mills's implementation. Unfortunately, I wasn't able to find its source, nor any other efficient implementation. (Please leave a comment if you find any.)
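To make the idea concrete, here is a minimal illustrative sketch (not Heaps and Mills's code) of an inverted index in pure Python: each term maps to the documents it occurs in, together with its per-document frequency.

```python
from collections import defaultdict

def build_inverted_index(docs):
    """Map each term to {doc_id: frequency} for the documents containing it."""
    index = defaultdict(dict)
    for doc_id, text in enumerate(docs):
        for term in text.lower().split():
            index[term][doc_id] = index[term].get(doc_id, 0) + 1
    return index

docs = ["the quick brown fox", "the lazy dog", "the quick dog"]
index = build_inverted_index(docs)
# "quick" occurs once in documents 0 and 2; "the" occurs in all three
```

A real implementation would add tokenization, stemming, and on-disk posting lists, but the term-to-postings mapping above is the core of the structure.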
If your goal isn't to create a library in pure Python, you can use PyLucene, a Python extension for accessing Lucene, which in turn is a very powerful search engine written in Java. Lucene implements an inverted index and can easily provide you with information on word frequency. It also supports a wide range of analyzers (tokenizers + stemmers) for dozens of languages.
(Also note that Lucene already has its own Similarity class for scoring.)
A few words about similarity and vector space models (VSMs). A VSM is a very powerful abstraction, but your implementation suffers from several disadvantages. As the number of documents in your index grows, your co-occurrence matrix will become too big to fit in memory, and searching it will take a long time. To counter this, dimensionality reduction is used. In methods like LSA (Latent Semantic Analysis) this is done with Singular Value Decomposition. Also pay attention to techniques such as PLSA, which uses probability theory, and Random Indexing, which is the only incremental (and thus the only one appropriate for large indexes) VSM method.
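To illustrate the LSA idea, here is a hedged sketch using NumPy: a tiny made-up term-document matrix is reduced with a truncated SVD, and documents are then compared by cosine similarity in the low-dimensional space. The terms, documents, and the choice of k are all hypothetical, chosen only to keep the example small.

```python
import numpy as np

# Toy term-document matrix (rows = terms, columns = documents); the
# vocabulary and counts are invented for illustration.
A = np.array([
    [2, 0, 1],   # term "search"
    [1, 0, 1],   # term "engine"
    [0, 2, 0],   # term "cooking"
    [0, 1, 0],   # term "recipe"
], dtype=float)

# Truncated SVD: keep only the top-k singular values/vectors.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
docs_k = (np.diag(s[:k]) @ Vt[:k]).T   # each row: a document in k-dim latent space

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Documents 0 and 2 share vocabulary, so they land close together in the
# latent space; document 1 (cooking terms) ends up far from both.
sim_02 = cosine(docs_k[0], docs_k[2])
sim_01 = cosine(docs_k[0], docs_k[1])
```

The same reduction is what keeps the similarity computation tractable as the index grows: comparisons happen between k-dimensional vectors instead of rows of the full co-occurrence matrix.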