Hi again,

I just finished importing as much of the English Wikipedia's link-structure data as I could. Basically, I downloaded a bunch of SQL dumps from Wikipedia's latest-dumps repository. Since I am using PostgreSQL instead of MySQL, I loaded all these dumps into my db using pipeline shell commands.
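
Roughly, each load looked something like this (the file and database names are just placeholders, and the middle step stands in for whatever MySQL-to-PostgreSQL conversion is used; this is only a sketch of the idea):

zcat enwiki-latest-pagelinks.sql.gz | &lt;mysql-to-postgres conversion&gt; | psql -h dbserver wikidb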

Anyway, one of these tables has 295 million rows: the pagelinks table; it contains all intra-wiki hyperlinks. From my laptop, using pgAdmin III, I sent the following command to my database server (another computer):

SELECT pl_namespace, COUNT(*) FROM pagelinks GROUP BY (pl_namespace);

It's been at it for an hour or so now. The thing is that the postmaster seems to be eating up more and more of my very limited HD space; it has used about 20 GB so far. I had previously played around with the postgresql.conf file in order to give PostgreSQL more room to work with (i.e. let it use more resources), since the server has 12 GB of RAM. I basically quadrupled most of the memory-related settings in that file, thinking it would use more RAM to do its thing.
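
For context, the kind of settings I touched in postgresql.conf look roughly like this (the values below are only illustrative, not the exact ones I used):

shared_buffers = 1600MB        # main shared memory pool
work_mem = 64MB                # memory per sort/hash operation before spilling to disk
maintenance_work_mem = 512MB   # used by CREATE INDEX, VACUUM, etc.
effective_cache_size = 8GB     # planner hint about OS cache; not an allocation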

However, the db does not seem to be using much RAM. In the Linux system monitor, the postmaster shows only about 9.5 MB of resident memory (plus 1.6 GB of shared memory). Anyway, I was wondering if you guys could help me better understand what it is doing, because it seems I really do not understand how PostgreSQL uses HD resources.

Concerning the structure of the Wikipedia databases, they provide a good schema that may be of use, or at least of interest, to you.

Feel free to ask me for more details, thx.

+1  A: 

What exactly is claiming that it's only taking 9.5MB of RAM? That sounds unlikely to me - the shared memory almost certainly is RAM which is being shared between different Postgres processes. (From what I remember, each client ends up as a separate process, although it's been a while so I could be very wrong.)

Do you have an index on the pl_namespace column? If there's an awful lot of distinct results, I could imagine that query being pretty heavy on a 295 million row table with no index. Having said that, 10GB is an awful lot to swallow. Do you know which files it's writing to?

Jon Skeet
Hi, I do indeed have an index on the namespace column; there are only 18 distinct namespace values. And you are right, it is using 1.6 GB of RAM; it seems the resident-memory figure simply doesn't include the shared memory! And you are also right about each connection having its own postmaster process.
Nicholas Leonard
Ok. So it seems that it is indeed using up all the RAM I told it it could use. But why would a command require so much HD space? Is it creating a temporary file? If so, where would this file be?
Nicholas Leonard
I don't know where it's creating the file - but with appropriate monitoring tools (strace? something better) you should be able to find out. Do you have a lot of logging, for example?
Jon Skeet
Hmm, I can't answer all of these questions concerning logging and monitoring, but mentioning them has made me realize that I should look into it, thx!
Nicholas Leonard
Also, it seems to have stopped using up more HD space...yet it is still executing. Hopefully, I will get my result set before running out of HD space!
Nicholas Leonard
Hi again. I went looking into the $PGDATA dir and found that 23 GB were used up in $PGDATA/base/16384/pgsql_tmp. So I am guessing that these files will all disappear when the SQL query is done? thx again
Nicholas Leonard
Hmmm... possibly. Not sure. See if you can find an explanation of what files are used for what purpose somewhere in the manual. Possibly reindexing?
Jon Skeet
Ooh - another thing to check: is it definitely the postmaster process associated with your query that is writing to disk, or is it another process?
Jon Skeet
While Vista provided me with a means to find that out, Linux's system monitor doesn't seem to show any IO info. Got any suggestions? And thx for the swift replies
Nicholas Leonard
+1  A: 

It's probably the GROUP BY that's causing the problem. In order to do grouping, the database has to sort the rows to put duplicate items together. An index probably won't help. A back-of-the-envelope calculation:

Assuming each row takes 100 bytes of space, 295 million rows comes to about 29,500,000,000 bytes, or roughly 30 GB of storage. It can't fit all that in memory, so your system is thrashing, which slows operations down by a factor of 1000 or more. Your HD space may be disappearing into swap space, if it's using swap files.

If you only need to do this calculation once, try breaking it apart into smaller subsets of the data. Assuming pl_namespace is numeric and ranges from 1 to 295 million, try something like this:

SELECT pl_namespace, COUNT(*)
FROM pagelinks
WHERE pl_namespace between 1 and 50000000
GROUP BY (pl_namespace);

Then do the same for 50000001-100000000 and so forth. Combine your answers together using UNION or simply tabulate the results with an external program. Forget what I wrote about an index not helping GROUP BY; here, an index will help the WHERE clause.
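
If you want the slices back in one result set, a sketch of the UNION approach might look like this (you can just as easily run each range on its own and merge the counts by hand):

-- Sketch only: stitch two of the per-range results together.
SELECT pl_namespace, COUNT(*) AS cnt
FROM pagelinks
WHERE pl_namespace BETWEEN 1 AND 50000000
GROUP BY pl_namespace
UNION ALL
SELECT pl_namespace, COUNT(*) AS cnt
FROM pagelinks
WHERE pl_namespace BETWEEN 50000001 AND 100000000
GROUP BY pl_namespace;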

Barry Brown
Nice! Actually, since the query is only executed to get a view of the statistical distribution of the *pagelinks.namespaces* in the db, all I really needed was to run it once on 50 million rows! thx for the knowledge!
Nicholas Leonard
A: 

Ok so here is the gist of it:

The GROUP BY clause meant the index could not be used, so the postmaster (the PostgreSQL server process) decided to create a bunch of temporary files (23 GB of them) in the directory $PGDATA/base/16384/pgsql_tmp.

When modifying the postgresql.conf file, I had given PostgreSQL permission to use 1.6 GB of RAM (which I will now double, since the machine has 11.7 GB of RAM available); the postmaster process was indeed using up the full 1.6 GB, but that wasn't enough to do the sort in memory, hence the pgsql_tmp directory.
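
For anyone else hitting this: the setting that governs how much memory a single sort/group step may use before spilling to those temp files is work_mem, and it can also be raised for just one session rather than in postgresql.conf (the value below is only an example):

-- Example only: raise the per-sort memory for this session, run the query,
-- then put the setting back.
SET work_mem = '512MB';
SELECT pl_namespace, COUNT(*) FROM pagelinks GROUP BY pl_namespace;
RESET work_mem;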

As was pointed out by Barry Brown, since I was only executing this SQL command to get some statistical information about the distribution of the links among the pagelinks namespaces, I could have queried a subset of the 296 million pagelinks rows (this is what they do for surveys).
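
In case it helps someone, a rough sketch of what such a subset query could look like (the LIMIT subquery is just one simple way to grab a slice; it is not a proper random sample):

-- Approximate namespace distribution from a 50-million-row slice
-- instead of the full pagelinks table.
SELECT pl_namespace, COUNT(*) AS links
FROM (SELECT pl_namespace FROM pagelinks LIMIT 50000000) AS sample
GROUP BY pl_namespace
ORDER BY links DESC;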

When the command finally returned its result set, all the temporary files were automatically deleted, as if nothing had happened.

Thx for your help guys!

Nicholas Leonard
Did the original query end up completing? Did you try the subset query, too? I'm interested to know how much faster the subset query was.
Barry Brown
Yes, the original query finished. As for your subset query, I did not execute it exactly as written. Since the original one completed, I am moving on to the next query that needs to be done, which is more complex, but it does use the general idea you suggested. Thx
Nicholas Leonard
Sorry, I can't say with enough certainty whether the subset approach actually helped because, like I said, I used this trick of yours in a more complex query, and both seem to use up about the same amount of disk space (23 GB). Thx
Nicholas Leonard