I'm investigating a PostgreSQL-backed application.
CPU usage is consistently over 50% on a modern Xeon with 4GB RAM. Of that 50% utilization, 67% is "user" and 33% is "system" (this is a Linux machine). The system is not waiting on I/O at all.
I'm wondering how I can break down where this CPU time is actually going.
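For what it's worth, here's one way I was thinking of measuring it: a minimal sketch using psycopg2 and the pg_stat_statements extension (assuming it's installed and preloaded; the connection string is made up, and on versions before PostgreSQL 13 the columns are total_time/mean_time rather than total_exec_time/mean_exec_time):

```python
# Minimal sketch: list the statements consuming the most server time.
# Assumes pg_stat_statements is in shared_preload_libraries and the
# extension has been created in this database.
import psycopg2

conn = psycopg2.connect("dbname=mydb")  # hypothetical connection string
with conn, conn.cursor() as cur:
    cur.execute("""
        SELECT query, calls, total_exec_time, mean_exec_time
        FROM pg_stat_statements
        ORDER BY total_exec_time DESC
        LIMIT 10
    """)
    for query, calls, total, mean in cur.fetchall():
        print(f"{total:10.1f} ms  {mean:8.2f} ms/call  {calls:6d}x  {query[:60]}")
```

This shows per-statement time rather than a parse/plan/execute split, but it would at least tell me which queries dominate.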
The queries are mostly ad-hoc SQL (no prepared statements) from what I can see.
Do you think this user CPU time could be significantly reduced by moving to prepared statements? That is, could SQL parse time, query planning time, etc. be eating this much CPU? Some of the queries are quite chunky (500-1000+ characters).
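For context, this is the kind of change I'm considering: a minimal sketch of a server-side prepared statement through psycopg2, using explicit PREPARE/EXECUTE (the get_user name and the users table are made-up examples):

```python
# Minimal sketch: parse/plan the statement once per session, then
# reuse the stored plan for every execution.
import psycopg2

conn = psycopg2.connect("dbname=mydb")  # hypothetical connection string
cur = conn.cursor()

# Parsed and planned once for this session.
cur.execute("PREPARE get_user (int) AS SELECT * FROM users WHERE id = $1")

# Each EXECUTE reuses the prepared statement instead of re-parsing the SQL.
for user_id in (1, 2, 3):
    cur.execute("EXECUTE get_user (%s)", (user_id,))
    print(cur.fetchone())
```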
Can anyone confirm whether PostgreSQL automatically normalizes ad-hoc queries and caches query plans for them, in effect making them as efficient as a prepared statement (plus the SQL parse time)?
I will probably implement some higher-level caching to solve this problem, but am curious to know whether anyone thinks it's worth moving this app to prepared statements.
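If it helps frame the question, the higher-level caching I have in mind is roughly this (a naive sketch; lookup_user and users are made-up names, and there is no invalidation or TTL, so it only suits data that can be slightly stale):

```python
# Naive sketch: in-process result cache in front of the database,
# so repeated lookups never reach PostgreSQL at all.
import functools
import psycopg2

conn = psycopg2.connect("dbname=mydb")  # hypothetical connection string

@functools.lru_cache(maxsize=1024)
def lookup_user(user_id: int):
    with conn.cursor() as cur:
        cur.execute("SELECT name, email FROM users WHERE id = %s", (user_id,))
        return cur.fetchone()

print(lookup_user(1))  # hits the database
print(lookup_user(1))  # served from the cache, no SQL issued
```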