I'm using a script, call it process.php, to process a large number of data records. Since the data set is huge, I want to finish the job faster by running multiple instances of the script with
/usr/bin/php process.php start_record end_record &
so that several instances run in parallel:
/usr/bin/php process.php 0 10000 &
/usr/bin/php process.php 10000 20000 &
/usr/bin/php process.php 20000 30000 &
/usr/bin/php process.php 30000 40000 &
...
I thought this way the job would be done much faster, but when I tried it the total time was very close to running the scripts sequentially (no concurrency). I don't know if that's because process.php is inserting records into an InnoDB table, or something else.
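In case it helps, here's roughly what process.php does. This is a simplified sketch, not the real code: the DSN, table, columns, and heavy_processing() are placeholders.

<?php
// Simplified sketch of process.php; the DSN, table, columns,
// and heavy_processing() are placeholders.
$start = (int) $argv[1];
$end   = (int) $argv[2];

$pdo  = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');
$stmt = $pdo->prepare('INSERT INTO results (record_id, value) VALUES (?, ?)');

for ($i = $start; $i < $end; $i++) {
    $value = heavy_processing($i); // the per-record work (placeholder)
    $stmt->execute([$i, $value]);  // one INSERT per record, autocommit on
}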
Any ideas?