views:

61

answers:

2

Hi all,

I am creating a very simple, very large PostgreSQL database. The database will have around 10 billion rows, which means I am looking at partitioning it into several tables. However, I can't find any information on how many partitions I should break it into.

I don't know yet what types of queries to expect, so it won't be possible to come up with a perfect partitioning scheme, but are there any rules of thumb for partition size?

Cheers,

Adrian.

+1  A: 

This post by Tom Lane suggests that partitioning isn't currently designed to scale past a few dozen partitions. The size of the partitions themselves shouldn't affect performance any more than splitting the data up some other way would.

rfusca
+1  A: 

That is about right. Our testing shows that after 50, you are pretty much in useless land. However, we have customers with tables that are a single TB in size. So, two dozen partitions or so should give you a whole lot of scalability.

Joshua D. Drake
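
To make the above concrete: with 10 billion rows in two dozen partitions, each partition holds roughly 400 million rows. A minimal sketch of the inheritance-plus-CHECK-constraint scheme the answers refer to is below; all table and column names are hypothetical, and the ranges assume you can partition on a bigint key:

```sql
-- Hypothetical parent table; children inherit its columns.
CREATE TABLE events (
    id      bigint NOT NULL,
    payload text
);

-- One child table per id range (~420 million ids each);
-- repeat the pattern for partitions 0..23.
CREATE TABLE events_p00 (
    CHECK (id >= 0 AND id < 420000000)
) INHERITS (events);

CREATE TABLE events_p01 (
    CHECK (id >= 420000000 AND id < 840000000)
) INHERITS (events);

-- With constraint_exclusion enabled, the planner skips any child
-- whose CHECK constraint rules it out for the query's WHERE clause.
SET constraint_exclusion = on;
SELECT * FROM events WHERE id = 123456;  -- only events_p00 is scanned
```

Note that you also need a trigger or rule on the parent to route INSERTs to the right child, which is part of why the planner overhead grows with the partition count the answers warn about.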