I'm working on a database with MySQL 5.0 for an open source project. It's used to store sentences in specific languages, along with their translations in other languages.

I used to have one big "sentences" table and a "sentences_translations" table (used to join sentences to sentences), but as we now have nearly one million entries, this is beginning to be a bit slow. Moreover, most requests are made using a "WHERE lang =" filter.

So I've decided to create one table per language, sentences_LANGUAGECODE,

and sentences_translation_LANGSOURCE_LANGTARGET,

and to create merge tables like sentences_ENG_OTHERS, which merges sentences_ENG_ARA, sentences_ENG_DEU, etc., for when we want the translations of an English sentence in all languages, and sentences_OTHERS_ENG for when we want only the English translations of some sentences.
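
For reference, one of the generated merge tables looks roughly like this (the column definitions here are only illustrative; in a real MERGE table they have to match the underlying MyISAM tables exactly):

    CREATE TABLE sentences_ENG_OTHERS (
        sentence_id    INT NOT NULL,
        translation_id INT NOT NULL,
        KEY (sentence_id)
    ) ENGINE=MRG_MYISAM
      UNION=(sentences_ENG_ARA, sentences_ENG_DEU)  -- one entry per language
      INSERT_METHOD=NO;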

I've created a script to create all these tables (there are around 31 languages, so more than 60 merge tables). I've tested it and it works really well: a request which used to take 160 ms now takes only 30 :) But I discovered that all my merge tables after the 15th end up with NULL as their storage engine instead of MRG_MYISAM. If I delete one, then I can create another, and using FLUSH TABLES between each creation also allows me to create more merge tables.
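
A quick way to list the broken ones (a standard information_schema query; 'mydb' stands for the actual database name):

    SELECT table_name, engine
    FROM information_schema.tables
    WHERE table_schema = 'mydb'       -- your database name here
      AND table_name LIKE 'sentences%'
      AND engine IS NULL;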

So is this a limitation of MySQL? Can we override it?

Thanks for your answers.

A: 

In this thread from the MySQL MERGE storage engine forum, it appears that many thousands of tables can be incorporated into a MERGE table. Some people have had problems with running out of file handles, but again this is usually only encountered after a few hundred tables are merged in. I suppose it depends on what OS you are using too - perhaps Windows has lower thresholds.
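
If file handles are the suspect, you can compare the server's limit against what it is currently using (both are standard MySQL variables):

    SHOW VARIABLES LIKE 'open_files_limit';
    SHOW GLOBAL STATUS LIKE 'Open_files';

Bear in mind that each MyISAM table under a merge has its own data and index files, and opening the merge table opens all of them at once, so the descriptors add up quickly.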

MySQL Forum for MERGE storage engine

Personally, I would avoid the MERGE engine if at all possible. I started using it back in 2002, and have been bitten by numerous bugs across many versions of MySQL. I get the impression that the MERGE engine is not a priority for MySQL, and perhaps not very well tested. Partitioning in 5.1 seems more featureful, but for your application I am surprised that you are having performance issues with only a million rows. How effective is your indexing?
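
For comparison, a minimal sketch (with made-up column names) of what the single-table approach looks like under 5.1; KEY partitioning is used here because LIST and RANGE partitioning in 5.1 only accept integer expressions:

    CREATE TABLE sentences (
        id   INT NOT NULL,
        lang CHAR(3) NOT NULL,
        text TEXT,
        PRIMARY KEY (id, lang)   -- the partitioning column must be part of every unique key
    ) ENGINE=MyISAM
    PARTITION BY KEY (lang)
    PARTITIONS 31;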

Martin
I will read carefully. I'm running MySQL 5.0 on Debian, as it's the one provided by my host. I have performance issues because of the following: I need the translations AND the translations of translations which are not already direct translations, so the request is a bit big; fetching just the direct translations is really fast. Moreover, the server is not really powerful (open source project :p). I will read your link carefully.
How about publishing the "explain" of your query? It might just want a bit of tuning.
Martin
http://pastebin.com/xbVVPTsW (for the request); the EXPLAIN gives the following: http://pastebin.com/w4Pypwdq (both are too long to be pasted directly here). Thanks a lot for your attention.
Sorry, I don't think I can improve on that. What you have looks pretty good in the EXPLAIN. Every join is supported by an index. All I could suggest is doing all the complicated work on sentences_translations and then separately joining the result to sentences to get the text, to reduce record reads to the minimum, but even then I doubt there would be much improvement.
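Something along these lines, with made-up column names since I only have the pastebin to go on:

    SELECT s.id, s.text
    FROM (
        SELECT st.translation_id
        FROM sentences_translations st
        WHERE st.sentence_id = 12345
    ) AS hits
    JOIN sentences s ON s.id = hits.translation_id;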
Martin
ok thanks anyway :)
A: 

(thanks to Martin for showing me the way)

On Linux, you have to edit

/etc/security/limits.conf

and add a line like this:

    * hard nofile 4096

At the next reboot it will take effect on its own. To apply it immediately, stop your server, run

    ulimit -n 4096

(or more, as root) and restart your server directly from that shell.
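
Once mysqld is back up, you can check that it picked up the new limit:

    SHOW VARIABLES LIKE 'open_files_limit';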

I don't know how to do this on Windows yet.