views: 1237
answers: 4

Simple C question: how can I correctly and succinctly convert milliseconds to seconds? There are two constraints:

  • I have no floating-point support in this tiny subset-of-C compiler
  • I need the result rounded to the nearest second (1-499 ms rounds down, 500-999 ms rounds up; negative values don't need to be handled)

    int mseconds = 1600; // should be converted to 2 seconds
    int msec = 23487;  // should be converted to 23 seconds
    
+2  A: 
int seconds = msec / 1000;
if (msec % 1000 >= 500)
    seconds++;
qrdl
He explicitly said he needed it rounded.
T.J. Crowder
You people are too fast to downvote :) It took me 20 seconds after posting to realise that rounding was needed and to add the rounding part, and by that time I'd got 2 downvotes :)
qrdl
+22  A: 

This should work:

int sec = (msec + 500) / 1000;
Chi
That is the cleanest way. I usually do that when I have more than 1.6 seconds to think about it ;-)
Guss
This assumes that you're rounding away from zero (as opposed to to-nearest-even) in the event of a tie, and that you're not rounding negative numbers...
Rowland Shaw
In general: to do an integer divide with rounding, add half of the divisor to the dividend before dividing. Rowland's caveats apply.
Laurence Gonsalves
@Rowland: If you expect negative numbers, then it's easy to account for that using (msec<0?-1:1)*500 - of course it means more CPU cycles, so only use it if you actually need it. A similar approach can be used to handle "round towards even".
Guss
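
Putting Laurence's rule and Guss's sign fix together gives something like this minimal sketch (the name msec_to_sec_round is illustrative; it assumes C99 integer division, which truncates toward zero):

int msec_to_sec_round(int msec) {
    /* add +500 or -500 (half the divisor) so that truncation
       toward zero rounds halves away from zero for either sign */
    return (msec + (msec < 0 ? -500 : 500)) / 1000;
}
/* msec_to_sec_round(1600)  ->  2
   msec_to_sec_round(-1600) -> -2
   msec_to_sec_round(23487) -> 23 */
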
+1  A: 

Something along these lines?

secs = mseconds / 1000 + (mseconds % 1000 >= 500 ? 1 : 0);
T.J. Crowder
I like Chi's answer better.
T.J. Crowder
Please don't put signatures under your answers. This is not a forum.
OregonGhost
I'm getting the hang of it, cheers.
T.J. Crowder
+2  A: 

At first I did not want to write this answer after testing it on x86, but testing on SPARC Solaris showed a performance gain compared with the "obvious solution", so maybe it will be useful to someone. I've taken it from a PDF that accompanies the book Hacker's Delight. Here it goes:

unsigned msec2sec(unsigned n) {
  unsigned q, r, t;
  n = n + 500;                     /* bias for round-to-nearest */
  t = (n >> 7) + (n >> 8) + (n >> 12);
  q = (n >> 1) + t + (n >> 15) + (t >> 11) + (t >> 14);
  q = q >> 9;                      /* q is now an estimate of n/1000 */
  r = n - q*1000;                  /* remainder of that estimate */
  return q + ((r + 24) >> 10);     /* bump q by 1 when r >= 1000 */
}

as opposed to:

unsigned msec2sec_obvious(unsigned n) {
  return (n + 500)/1000;
}

On x86 the "obvious algorithm" translates into adding 500 and then a long multiply by 274877907, followed by grabbing the most significant 32 bits from edx and shifting them 6 bits right - so it beats the code above hands down (~5x performance difference).
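
That reciprocal multiplication can be written out in portable C as a sketch (the name div1000_magic is illustrative; 274877907 is 2^38/1000 rounded up, a constant the compiler derives itself):

unsigned div1000_magic(unsigned n) {
  /* n/1000 == (n * ceil(2^38/1000)) >> 38 for all 32-bit n */
  return (unsigned)(((unsigned long long)n * 274877907ULL) >> 38);
}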

However, on Solaris/SPARC the "obvious" version is compiled into a call to .udiv - which all in all turns out to give a performance difference of ~2.5 times in the other direction.
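
A minimal harness to confirm the two versions agree (a sketch; it assumes both functions above are compiled into the same program):

#include <stdio.h>

int main(void) {
    unsigned n;
    /* exhaustive check over the first million inputs */
    for (n = 0; n <= 1000000u; n++) {
        if (msec2sec(n) != msec2sec_obvious(n)) {
            printf("mismatch at n = %u\n", n);
            return 1;
        }
    }
    printf("both versions agree\n");
    return 0;
}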

Andrew Y