views:

92

answers:

4

Hi everyone,

I have stumbled into a problem and I'd love to hear some suggestions from you.

I have a C++ program that uses a precompiled library to make queries to a PostgreSQL database. The problem is that I want to find out the total (combined) CPU time taken by all the routines in the program's source code, plus the time it spends waiting on database-related activity.

I used the time command in Linux, but it seems that it didn't measure the time the program spent in the database.

In my situation it won't be possible to recompile the library provided to me, so I don't think tools like gprof would work.

Any suggestions?

Thank you.

A: 
kisplit
Hi, thanks for answering. However, I don't think that would work in this case, as it only measures CPU time in the current process, excluding the time spent in the database doing the query.
roberto
+1  A: 

Use POSIX's times(); it measures the real, user, and system time of a process and its children.

There is an example on the linked Opengroup page: "Timing a Database Lookup"

Peter G.
Thanks! This might be what I need. I'll try it out and see if it really gives what I want.
roberto
A: 

Of course you'll get the wall-clock time anyway, but presumably you're trying to get the CPU time.

This is nontrivial when you have subprocesses (or unrelated processes) involved. However, you may want to try to have a more holistic approach to benchmarking.

Measuring the latency of an application is easy enough (just watch the wall-clock) but throughput is generally harder.

To get an idea of how an application behaves under load, you need to put it under load (on production-grade hardware), in a reproducible way.

This normally means hitting it with lots of tasks concurrently, as modern hardware tends to be able to do several things at once. Moreover, if anything in your app ever waits for any external data source (potentially including the hard drive of your own machine), you can get better throughput even on a single core by having multiple requests served at once.

You may also want to look at tools like oprofile; note, though, that it is designed for profiling rather than benchmarking.

MarkR
Thanks! Yes, I might look into that later on, but for now I'll stick to the requirement.
roberto
A: 

You can turn on log_statement and log_duration and set log_min_duration_statement = 0 in postgresql.conf, run your program, and then analyze the Postgres logs, for example with PQA.
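The postgresql.conf fragment for that would look roughly like this (a sketch; with log_min_duration_statement = 0, every statement's duration is logged, and a server reload is needed for the changes to take effect):

```
# postgresql.conf -- log every statement and its duration
log_statement = 'all'
log_duration = on
log_min_duration_statement = 0
```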

Tometzky
Yes, that seems like a good way to do it. However, I'd prefer to do everything without modifying any config files if possible, so I'll go for Peter's solution first and keep this as a last resort. Thank you very much for the suggestion.
roberto