views: 190
answers: 3

Hi there, I run a price comparison data engine, and because we collect so much data I'm running into pretty serious performance issues. We generate one XML file per product, and within each product's data is every online shop we grab data from, with their price, link, description, etc.

We have multiple feed parsers/scrapers which collect the price information for each product. The product data is uploaded to a MySQL DB, and a PHP script on the server then generates the XML for every product.

The problem we are running into is that for 10,000 products, the XML generation takes almost 25 minutes! The DB is completely normalised and I am producing the XML via PHP's DOM extension.

The XML generation process doesn't take into consideration whether any of the data has actually changed, and this is the problem I am facing. What is the most efficient way of skipping the generation of XML files whose data has not changed?

Do I use a flag system? But doesn't this result in more DB lookups, which may increase the DB overhead? The current queries only take ~0.1 seconds per product.

Also, what happens if only one price for one shop changes within an XML file? It seems a waste to rewrite the entire file because of that, but surely a preg_replace would be just as time consuming?

Thanks for your time, really appreciated!

+3  A: 

When an entry is posted into your database, MD5 hash the contents into another field. Then when you poll for an update, compare the MD5 from the database to a hash of the file on the server. If they match, don't do anything; if they differ, run your update.

Whenever I can I make the filename on the server the MD5 hash so I have to do even less server work--I just compare the filename to the DB hash.
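To illustrate, something along these lines on the scraper side; the table, column, and path names are just placeholders, as is the regenerate_product_xml() helper standing in for your existing generator:

    <?php
    // Rough sketch -- $pdo, the schema names, and regenerate_product_xml()
    // are placeholders, not your real setup.
    $newHash = md5($scrapedContent);

    $stmt = $pdo->prepare('SELECT content_hash FROM products WHERE id = ?');
    $stmt->execute(array($productId));
    $storedHash = $stmt->fetchColumn();

    if ($newHash !== $storedHash) {
        // Data changed: store the new hash and regenerate this product's XML.
        $upd = $pdo->prepare('UPDATE products SET content_hash = ? WHERE id = ?');
        $upd->execute(array($newHash, $productId));
        regenerate_product_xml($productId);
    }

    // Filename-as-hash variant: skip the DB round trip and just check the disk.
    $xmlPath = '/var/xml/' . $newHash . '.xml';
    if (!file_exists($xmlPath)) {
        regenerate_product_xml($productId, $xmlPath);
    }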

As for the internal updating, you will probably need to use some sort of regex, but you will be doing the replacement less often since you will know when something in the file has changed.

One other thing. In doing quite a bit of flat-file caching I have benchmarked a few different ways of storing the data, and it looks like it is almost always faster to gzencode() the files before storage and then decode them when you need to read them. It saves server space and has been faster in my benchmarks (do your own, though, since hardware and storage needs differ).
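For example, with placeholder paths:

    <?php
    // Store the generated XML gzip-compressed; the paths here are placeholders.
    $xml = $dom->saveXML();                          // whatever the generator produces
    file_put_contents('/var/xml/product-123.xml.gz', gzencode($xml, 6));

    // Reading it back: gzdecode() exists in PHP 5.4+; on older versions the
    // compress.zlib:// stream wrapper does the same job.
    $xml = gzdecode(file_get_contents('/var/xml/product-123.xml.gz'));
    // or: $xml = file_get_contents('compress.zlib:///var/xml/product-123.xml.gz');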

EDIT:

In re-reading your post, it sounds like you would be hashing the data from your scrapers to compare to the DB. Still the same basic idea, but I wanted to clarify that I think it would still work. Your query overhead should still be light, since you would only be pulling 32 characters from the DB in a very specific query--with indexes set correctly it should be VERY fast.

Also, though I have never used it, look into something like SimpleXML, which is native in PHP--it may give you a quick and easy way to change data in well-formed XML without having to write the regex yourself.
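Something like this, for instance, to change a single shop's price in an existing file; the element and attribute names are just guesses at your schema:

    <?php
    // Change one shop's price inside an existing product file.
    // The <shop id=""> / <price> structure is invented -- adjust to the real schema.
    $sxe = simplexml_load_file('/var/xml/product-123.xml');

    foreach ($sxe->shop as $shop) {
        if ((string) $shop['id'] === '42') {
            $shop->price = '19.99';                  // overwrite just this node's value
            break;
        }
    }

    $sxe->asXML('/var/xml/product-123.xml');         // write the document back out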

angryCodeMonkey
Hi Shane, thanks for the reply - this has given me an idea. As the DB is fully normalised, I do have some fast, indexed tables which could be ideal for this. One of the tables is used for the product id / shop id relationship, and it seems an ideal location to add two more fields: 1) md5, 2) update (y/n). When the scraper grabs the new info and md5's it, it compares it to the stored md5 and, if different, updates the md5 and sets the update flag to 'y'. The PHP XML generator can then poll each product id and, if no shop has a 'y', skip the file write; if any do, write the whole file.
Peter John
@Peter John - Absolutely! That way you can run your processes in cron, even from a different server, and update only the necessary items as frequently as needed. Sounds like a plan! Let me know how this works out for you...
angryCodeMonkey
Seems a great way forward! I have some benchmarking within my scripts, so I will definitely let you know how it goes. Looking forward to some big improvements. Cheers
Peter John
Hi, I have an update for you! I made some major changes to the architecture over the weekend and put the hashing updates in place, with a new SimpleXML/DOM XML updater. The results have been fantastic! 25 minutes of processing is now down to an average of 12 seconds! Thanks for your help on this; I have a relaxed server now.
Peter John
@Peter John - Man that is great! I am glad to hear that it had such a drastic effect on your server load. I think re-jiggering a script to run 125 times faster could be called a productive weekend.
angryCodeMonkey
A: 

A preg_replace is going to be much worse. You might want to move away from DOMDocument to SimpleXML, which I think has less overhead, but at the same time, if you need to remove nodes, you have to bring DOMDocument into the mix in order to preserve your sanity.
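You can mix the two, though: read and edit with SimpleXML, and drop down to DOM only for the removal, roughly like this (element and attribute names invented for illustration):

    <?php
    // Read and edit with SimpleXML, but use DOM just to delete a node.
    $sxe = simplexml_load_file('/var/xml/product-123.xml');

    foreach ($sxe->shop as $shop) {
        if ((string) $shop['id'] === '42') {
            // dom_import_simplexml() exposes the same underlying node as a
            // DOMElement, so it can be detached from its parent.
            $node = dom_import_simplexml($shop);
            $node->parentNode->removeChild($node);
            break;
        }
    }

    $sxe->asXML('/var/xml/product-123.xml');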

I also second Shane's suggestion about comparing hashes of the scraped data to the DB data. It seems like a good way to weed out the changes, and then you can process them with the DOM library of your choice.

prodigitalson
OK, great, thanks for your reply - I will have a play with both DOM and SimpleXML and see which works out quickest.
Peter John
A: 

10,000 files written in 25 minutes is only about 6-7 files per second. Even though your HD may support gigabytes per second of raw throughput, you cannot sustain that when the data is split across many small files; there is overhead involved in creating each new file in the filesystem's index.

IMHO, the core issue is that you're dealing with static files, which is a poor choice for performance. The smartest solution is to stop using these static files, as they clearly don't perform as well as database queries. If something is directly parsing these files, perhaps you should look into using mod_rewrite for Apache: instead of writing actual XML files, have the URL run a live database query and output the document on demand. That way you don't have to pre-generate all the XML files.
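A rough sketch of the on-demand approach; the URL pattern, script name, and element names are all hypothetical:

    <?php
    // product_xml.php -- build the document on request instead of pre-generating it.
    // Reached via a rewrite rule along the lines of:
    //   RewriteRule ^xml/product-([0-9]+)\.xml$ /product_xml.php?id=$1 [L]
    $id = isset($_GET['id']) ? (int) $_GET['id'] : 0;

    header('Content-Type: application/xml; charset=utf-8');

    $dom = new DOMDocument('1.0', 'UTF-8');
    $product = $dom->appendChild($dom->createElement('product'));
    $product->setAttribute('id', (string) $id);

    // ... append one <shop> element per row pulled live from MySQL, exactly as
    // the current generator does ...

    echo $dom->saveXML();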

But if you continue with this sub-optimal method, you will need dedicated storage for it. By any chance, are you housing the database and the web server on the same box? If so, you should separate them. You might need a separate server or NAS to store these XML files, probably in a high-performance RAID 0 setup.

In summary, I highly doubt your database is the bottleneck; it's the act of saving all these tiny files.

TravisO
Thanks TravisO. The thing is, I update each product's info independently via JS on the front end. A user can view multiple products at the same time, and the auto data fetch runs twice a minute: 10 products at once * 2 requests per minute = 20 requests per user, * 1,000 users = 20,000 requests per minute. I couldn't have this hit the DB; I think it would blow up! I can't memcache the DB, as stock levels change all the time. This is why I chose the flat-file route. I hope this helps explain things a bit better.
Peter John
Then you're going to have to improve the speed of writing these files, which means separate RAID 0 storage or upgrading your current server's HD.
TravisO
A database is nothing but a highly optimized set of flat files (of course there is memcache, but he stated that can't be used), so wouldn't the I/O bottleneck be very similar between the DB and flat files? In fact, in this case I think he will see improvements by using the filesystem, since the MySQL connection overhead is avoided.
angryCodeMonkey
@Shane I won't disagree with you; the only way to know for sure is to benchmark both scenarios. It's best not to assume speed comparisons, since they relate directly to your OS, hardware, and the versions of the platforms you're running.
TravisO