views: 699
answers: 4

I would like to know how to convert a 64-bit long data type to a 16-bit data type. This is needed in an Ethernet application to include a timestamp: we only have 2 bytes (16 bits) left in the frame for the timestamp, but the Win API gives us the timestamp as a 64-bit long.

Any answer will be highly appreciated.

Thank you.

+5  A: 

Well, you can't fit 64 bits of information into 16 bits of storage without losing some of the information.

So it's up to you how to quantize or truncate the timestamp. For example, suppose you get the timestamp with nanosecond precision but only need to store it at second precision: divide the 64-bit number by 1,000,000,000 and you are left with seconds. That may or may not fit into 16 bits (16 bits can only store up to 65535 seconds).
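A minimal sketch of that truncation (the helper name and the nanosecond input are my assumptions, not from the answer):

```c
#include <stdint.h>

/* Hypothetical helper: quantize a 64-bit nanosecond timestamp down to
 * a 16-bit seconds value.  Precision below one second is discarded,
 * and values above 65535 seconds wrap around modulo 2^16. */
uint16_t ns_to_sec16(uint64_t ns)
{
    uint64_t seconds = ns / 1000000000ULL; /* nanoseconds -> whole seconds */
    return (uint16_t)seconds;              /* keep only the low 16 bits */
}
```

For example, an input of 5 billion nanoseconds yields 5, while a timestamp of exactly 65536 seconds wraps back to 0.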

If it won't fit, then the timestamp will wrap around periodically. Again, that may or may not be a problem in your case.
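One reason wrapping is often harmless: if all you need is the elapsed time between two samples, unsigned 16-bit subtraction stays correct across a single wrap. A sketch under that assumption (the function name is mine):

```c
#include <stdint.h>

/* Elapsed ticks between two 16-bit timestamps.  Unsigned arithmetic
 * is defined modulo 2^16, so the difference is still right when the
 * counter has wrapped once between the two readings. */
uint16_t elapsed16(uint16_t start, uint16_t now)
{
    return (uint16_t)(now - start);
}
```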

In any case, if you need to interface with an existing library that requires timestamps, figure out what it needs in that timestamp (clock ticks? seconds? years?). Then figure out what the Windows time function you're using returns, and convert the Windows time unit into the library's time unit.

NeARAZ
+1  A: 

16 bits may or may not be enough, depending on what you need the timestamp for. For most purposes it's way too small or at least inconvenient. But some examples where this might work could be: timeouts, measuring round-trip time for packets, grossly measuring time intervals (which might work alright for displaying time information to users) and so on.

On the other hand, it's probably useless for reordering packets. If this is the case, I'd suggest you replace the timestamp with a sequence counter. Depending on the typical number of packets in the stream, you might even be able to cut down a few bits and use them for other purposes, since sequence counters can handle wrapping more easily.
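A sketch of how such a sequence counter tolerates wrapping: compare two counters by interpreting their 16-bit difference as signed, in the style of TCP sequence numbers or RFC 1982 serial arithmetic (the helper name is my own):

```c
#include <stdint.h>

/* Returns nonzero if b is "newer" than a.  Casting the modulo-2^16
 * difference to a signed 16-bit value gives the right answer across
 * a wrap, provided the two counters are less than 2^15 apart. */
int seq16_newer(uint16_t a, uint16_t b)
{
    return (int16_t)(uint16_t)(b - a) > 0;
}
```

So the packet numbered 0 is correctly judged newer than the packet numbered 65535, even though its raw value is smaller.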

Eduard - Gabriel Munteanu
A: 

As the others said, the first problem is deciding on the correct scaling. You've got to balance your resolution against your desired maximum range. One way to think about it is deciding how many seconds per bit you want. With 1 second per bit you can express values from 1 second up to 65535 seconds, or roughly 1000 minutes. With 1 millisecond per bit you can go from 0.001 seconds up to about 65.5 seconds.

Here's one way to do the conversion.

#include <windows.h>

#define SECONDS_PER_BIT 0.0001                   /* <-- your value here */
#define BITS_PER_SECOND (1.0 / SECONDS_PER_BIT)

UINT16 timestamp(void)
{
    LARGE_INTEGER counts_per_second, counts;

    QueryPerformanceFrequency(&counts_per_second);
    QueryPerformanceCounter(&counts);
    /* Scale raw counts to your chosen unit; the cast truncates to
       the low 16 bits, so the result wraps periodically. */
    return (UINT16)((counts.QuadPart * (UINT64)BITS_PER_SECOND)
                    / (UINT64)counts_per_second.QuadPart);
}
AShelly
A: 

It depends entirely on what you use the timestamp for. You mention Ethernet, so one obvious use I could imagine is ordering packets. In that case all you really need is a counter. Instead of your timestamp saying "this packet was sent on May 14th at 14:35", it can simply say "this is the 4023rd packet".

If you need it to record actual clock time, you just have to pick which parts of it are relevant. 16 bits gives you 65536 values to play with. Do you want those to represent seconds? Then your timestamps will wrap around every 18 hours or so.

Or they can be minutes. Then it'll be 45 days before they wrap around. Or days, or microseconds, it all depends on what you need.
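The wrap-around periods above follow directly from 2^16 ticks; a small sketch of the arithmetic (the helper name is my own, not from the answer):

```c
#include <stdint.h>

/* Time before a 16-bit timestamp wraps, given the tick size in
 * milliseconds: 2^16 ticks of ms_per_tick milliseconds each. */
uint64_t wrap_period_ms(uint64_t ms_per_tick)
{
    return 65536ULL * ms_per_tick;
}
```

With 1-second ticks the period is 65,536,000 ms (about 18.2 hours); with 1-minute ticks it comes out to roughly 45.5 days.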

But the only way to convert a 64-bit value into a 16-bit one is to discard 48 bits of data. You choose which ones.

jalf