views: 74
answers: 2

Hello,
Please tell me how HBase partitions a table across regionservers.

For example, let's say my row keys are integers from 0 to 10M and I have 10 regionservers.
Does this mean that the first regionserver will store all rows with keys 0 - 1M, the second 1M - 2M, the third 2M - 3M, ..., the tenth 9M - 10M?

I would like my row key to be a timestamp, but since most queries would apply to the latest dates, all queries would end up being processed by only one regionserver. Is that true?

Or would this data be spread differently?
Or can I somehow create more regions than I have region servers, so that (following the example above) server 1 would hold keys 0 - 0.5M and 3M - 3.5M? That way my data would be spread more equally. Is this possible?


Update

I just found that there's an option, hbase.hregion.max.filesize. Do you think this will solve my problem?

+1  A: 

WRT partitioning, you can read Lars' blog post on HBase's architecture or Google's Bigtable paper, which HBase "clones".

If your row key is only a timestamp, then yes, the region with the biggest keys will always be hit by the new requests (since a region is only served by a single region server).

Do you want to use timestamps in order to do short scans? If so, consider salting your keys (search Google for how Mozilla did it with Socorro).

Can you prefix the timestamp with some ID? For example, if you only request data for specific users, then prefix the ts with that user ID and it will give you a much better load distribution.

If not, then use UUIDs or anything else that will randomly distribute your keys.
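
If it helps, here is a minimal Java sketch of both layouts; the 8-byte user ID, the bucket count, and every name in it are assumptions for illustration, not anything HBase prescribes:

    import java.nio.ByteBuffer;

    public class RowKeys {

        // Layout A (assumed): 8-byte user ID followed by 8-byte timestamp.
        // Rows for one user stay contiguous and time-ordered, so short
        // per-user scans remain possible.
        static byte[] userPrefixedKey(long userId, long timestampMillis) {
            return ByteBuffer.allocate(16)
                    .putLong(userId)
                    .putLong(timestampMillis)
                    .array();
        }

        // Layout B (assumed): one leading "salt" byte derived from the
        // timestamp. Writes spread over NUM_BUCKETS key ranges, but every
        // time-range scan must now fan out over all buckets.
        static final int NUM_BUCKETS = 16;

        static byte[] saltedKey(long timestampMillis) {
            byte bucket = (byte) Math.floorMod(Long.hashCode(timestampMillis), NUM_BUCKETS);
            return ByteBuffer.allocate(1 + 8)
                    .put(bucket)
                    .putLong(timestampMillis)
                    .array();
        }
    }

The trade-off: the user-ID prefix keeps each user's rows contiguous, so per-user scans still work, while salting spreads writes over NUM_BUCKETS ranges at the cost of fanning every scan out over all of them.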

About hbase.hregion.max.filesize

Setting the maxfilesize on a table (which you can do with the shell) doesn't mean that each region ends up exactly X MB big (where X is the value you set). Say your row keys are all timestamps, so each new row key is bigger than the previous one; it will always be inserted into the region with the empty end key (the last one). At some point one of that region's files will grow bigger than maxfilesize (through compactions), and the region will be split around its middle: the lower keys go into their own region, the higher keys into another. But since every new row key is still bigger than all the previous ones, you will only ever write to that newest region (and so on).

tl;dr: even if you had more than 1,000 regions, with this schema the region holding the biggest row keys will always receive the writes, which means its hosting region server becomes a bottleneck.
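
For reference, here is a sketch of changing that limit for one table through the old Java admin API (the shell's alter command does the same thing); the table name and the 512 MB value are made-up examples:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.client.HBaseAdmin;
    import org.apache.hadoop.hbase.util.Bytes;

    public class SetMaxFileSize {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            HBaseAdmin admin = new HBaseAdmin(conf);

            byte[] table = Bytes.toBytes("events");   // hypothetical table
            admin.disableTable(table);                // modifyTable expects a disabled table

            HTableDescriptor desc = admin.getTableDescriptor(table);
            desc.setMaxFileSize(512L * 1024 * 1024);  // split regions around 512 MB instead
            admin.modifyTable(table, desc);

            admin.enableTable(table);
        }
    }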

jdcryans
Yes, I want to do scans of timestamps. I'm considering prefixing the timestamp with a user ID. Can you provide more information about Socorro? I can't find anything about it. Thanks.
Wojtek
But prefixing the timestamp with a user ID would make scanning impossible, I think (see the sketch after these comments).
Wojtek
Thanks for the blog link, it's great :)
Wojtek
See this http://blog.mozilla.com/webdev/2010/05/19/socorro-mozilla-crash-reports/ for Socorro (there's also a link to their GitHub repo with all their code).
jdcryans
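
On the scanning concern raised above: with a userID|timestamp key, per-user time-range scans are still possible, because the start and stop rows can share the same user prefix. A minimal sketch against the classic client API; the table name and the time window are made up:

    import java.nio.ByteBuffer;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;

    public class UserTimeRangeScan {

        // Same assumed 16-byte layout as the sketch in the answer above.
        static byte[] key(long userId, long ts) {
            return ByteBuffer.allocate(16).putLong(userId).putLong(ts).array();
        }

        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            HTable table = new HTable(conf, "events"); // hypothetical table

            long userId = 42L;
            long from = 1274227200000L;                // start of the window
            long to   = 1274313600000L;                // end of the window (stop row is exclusive)

            // Both boundaries carry the same 8-byte user prefix, so the scan
            // only touches that user's contiguous slice of the keyspace.
            Scan scan = new Scan(key(userId, from), key(userId, to));
            ResultScanner scanner = table.getScanner(scan);
            try {
                for (Result r : scanner) {
                    System.out.println(java.util.Arrays.toString(r.getRow()));
                }
            } finally {
                scanner.close();
                table.close();
            }
        }
    }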
A: 

The option hbase.hregion.max.filesize, which is 256MB by default, sets the maximum region size; once a region reaches this limit, it is split. This means that my data will be stored in multiple regions of 256MB each, plus possibly one smaller one (for example, 10GB of data would end up in roughly forty 256MB regions).
So

I would like my row key to be a timestamp, but since most queries would apply to the latest dates, all queries would end up being processed by only one regionserver. Is that true?

This is not true, because the latest data will also be split into regions of 256MB and stored on different regionservers.

Wojtek
