I need to transfer the current operating system's date and time, accurate to the microsecond, from party A to party B over TCP/IP.

  • What is the least number of bytes that can be used to represent this without sacrificing accuracy?

  • And how do I convert the date and time to that byte format using C#?

+1  A: 

DateTime.Ticks is the binary representation of a DateTime.

Benny
+8  A: 

The least number of bits comes from counting microseconds since some fixed point in time. But when you do this, at some point in the future your timestamps will overflow.

If you pick the fixed point in time as yesterday, for instance, then 5 bytes (2^40 microseconds, or about 12.7 days) will be enough to encode times from yesterday until the middle of next week.

But this seems unnecessarily complicated. DateTime.Ticks uses only 8 bytes of space, it's accurate to 1/10th of a microsecond (100 nanoseconds), it spans all of recorded human history, it references a defined point in time (January 1, year 1), and most importantly, it's standard.
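A minimal sketch of both directions in C# (the TickTransfer name and stream handling are just for illustration; error handling and agreeing on byte order between the two machines are left out):

    using System;
    using System.IO;

    static class TickTransfer
    {
        // Sender: DateTime.Ticks is a long counting 100-nanosecond
        // intervals since January 1, year 1, so 8 bytes carry full precision.
        public static void Send(Stream stream, DateTime value)
        {
            byte[] payload = BitConverter.GetBytes(value.Ticks);
            stream.Write(payload, 0, payload.Length); // always 8 bytes
        }

        // Receiver: rebuild the DateTime from the same 8 bytes.
        // Both machines must use (or normalize to) the same byte order.
        public static DateTime Receive(Stream stream)
        {
            byte[] payload = new byte[8];
            int read = 0;
            while (read < payload.Length)
            {
                int n = stream.Read(payload, read, payload.Length - read);
                if (n == 0) throw new EndOfStreamException();
                read += n;
            }
            return new DateTime(BitConverter.ToInt64(payload, 0));
        }
    }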

John Knoeller
In order to use DateTime.Ticks, I need to choose a fixed point in time, right? Is there a standard for choosing this fixed point in time?
Lopper
@Lopper, NO, you do not need to choose a fixed point in time. System.DateTime has done that for you.
Portman
What Portman said. You only need to choose a point in time if you don't use the standard one.
John Knoeller
@Portman, I see! Thanks!
Lopper
Standard? We don't believe in standards - we invent our OWN time at this company!!
Larry Watanabe
I vote for Jan 17, 1984. The date of the first Macintosh Commercial.
John Knoeller
If you really want to pack it down, and you can be sure that the date is within some span of "now" (say a few minutes or hours), then something like the first approach above should work. What's the best idea depends on how much you care about code complexity vs. bandwidth.
BCS
+1  A: 

Apparently DateTime.Ticks can represent time down to the nanosecond level, from the year 0001. According to the poster above, it requires 8 bytes of space.

However, to get a minimal representation for all the microseconds from year 1 to year 9999, you can calculate it as follows:

9999 years * 365 days * 24 hrs * 60 min * 60 secs * 1000000 microseconds = 315328464000000000 (about 3.15 * 10^17)

ln(above number) / ln(2) = 58.13 .. -- 59 bits

59 bits is just under 7.4 bytes.

So if you pack at the bit level, or shrink the range a little (56 bits covers about 2,285 years), you can save a byte by using a non-standard representation (basically you save by not having to represent the sub-microsecond resolution of Ticks).

Writing the conversion routines is simple enough. Just convert each date and time to the number of microseconds since the beginning of year 1; to get the date and time back again, do the reverse calculation. It's simple enough that you should be able to figure it out.
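Here is a rough sketch of what those routines might look like in C# (MicrosecondCodec is a made-up name; it packs into 7 bytes and therefore assumes the reduced range mentioned above, roughly years 1 through 2285):

    using System;

    static class MicrosecondCodec
    {
        const long TicksPerMicrosecond = TimeSpan.TicksPerMillisecond / 1000; // = 10

        // Pack the date/time as microseconds since January 1, year 1,
        // into the low 56 bits (7 bytes). Only valid while the count
        // fits in 56 bits -- roughly years 1 through 2285.
        public static byte[] Encode(DateTime value)
        {
            ulong micros = (ulong)(value.Ticks / TicksPerMicrosecond);
            byte[] bytes = new byte[7];
            for (int i = 0; i < 7; i++)
                bytes[i] = (byte)(micros >> (8 * i)); // little-endian
            return bytes;
        }

        // Reverse calculation: microseconds back to a DateTime.
        public static DateTime Decode(byte[] bytes)
        {
            ulong micros = 0;
            for (int i = 0; i < 7; i++)
                micros |= (ulong)bytes[i] << (8 * i);
            return new DateTime((long)micros * TicksPerMicrosecond);
        }
    }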

However, I highly doubt saving 1 byte is worth the work, potential bugs, non-standardization, maintenance, documentation, etc. etc.

Larry Watanabe
Minor point: DateTime.Ticks is accurate to 100 nanoseconds, or 1/10 of a microsecond. It is not accurate to the nanosecond.
John Knoeller
+1  A: 

Answer: an infinite number of bytes.

Reason: see John Knoeller's answer. Basically, if you choose any finite number of bits, then it can only represent a finite number of points in time. Since time extends infinitely into the past and future, it will not be able to represent some (actually an infinite number of) instants in time.

This would be a good interview question :)

Larry Watanabe
Not necessarily infinite; it's only been about 14 billion years since the Big Bang, and we have no idea how long we'll last. Nevertheless, I gave you +1!
Doug Currie
Time is a continuous variable, so there's an infinity of moments in any timespan.
Rik
@Doug Currie - you're assuming this application is only referencing actual time in history and not an infinite future. This could be an astronomy application attempting to calculate star positions billions and billions of years from now.
Jeff O
+1  A: 

If you are always including microseconds, you might as well use the fixed number of bytes already discussed. However, if this is general-purpose serialization, you might want to exploit the fact that a lot of real-world values are whole dates or whole hours. Protobuf-net, for example, marks the scale (days, hours, minutes, seconds, milliseconds or ticks) and sends the integer number of [whatever] between the given time and an arbitrary epoch (1/1/1970). To shrink this further, the integer is sent with variable-length encoding, so the most common values take very few bytes. This lets it send things like date values in a minimal number of bytes, as the sketch below shows.
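A rough sketch of that idea in C# (this is not protobuf-net's actual wire format; the TimeScale enum and helper names are invented for illustration, and dates before the epoch aren't handled):

    using System;
    using System.IO;

    enum TimeScale : byte { Days, Hours, Minutes, Seconds, Milliseconds, Ticks }

    static class ScaledTimeWriter
    {
        static readonly DateTime Epoch = new DateTime(1970, 1, 1);

        public static void Write(Stream stream, DateTime value)
        {
            long ticks = (value - Epoch).Ticks; // assumes value >= epoch

            // Pick the coarsest scale that loses nothing.
            TimeScale scale; long count;
            if (ticks % TimeSpan.TicksPerDay == 0)
                { scale = TimeScale.Days; count = ticks / TimeSpan.TicksPerDay; }
            else if (ticks % TimeSpan.TicksPerHour == 0)
                { scale = TimeScale.Hours; count = ticks / TimeSpan.TicksPerHour; }
            else if (ticks % TimeSpan.TicksPerMinute == 0)
                { scale = TimeScale.Minutes; count = ticks / TimeSpan.TicksPerMinute; }
            else if (ticks % TimeSpan.TicksPerSecond == 0)
                { scale = TimeScale.Seconds; count = ticks / TimeSpan.TicksPerSecond; }
            else if (ticks % TimeSpan.TicksPerMillisecond == 0)
                { scale = TimeScale.Milliseconds; count = ticks / TimeSpan.TicksPerMillisecond; }
            else
                { scale = TimeScale.Ticks; count = ticks; }

            stream.WriteByte((byte)scale);
            WriteVarint(stream, (ulong)count); // small counts take few bytes
        }

        // Base-128 varint: 7 payload bits per byte, high bit = "more follows".
        static void WriteVarint(Stream stream, ulong value)
        {
            while (value >= 0x80)
            {
                stream.WriteByte((byte)(value | 0x80));
                value >>= 7;
            }
            stream.WriteByte((byte)value);
        }
    }

Midnight on 1/1/2000, for instance, goes out as one scale byte plus a two-byte varint day count instead of 8 bytes of ticks.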

Marc Gravell
A: 

I have to take issue with Larry Watanabe's answer.

You can, in fact, represent the time in microseconds using a single byte. You'll only be able to represent times within 255 microseconds of the start of your epoch, which may or may not be a significant limitation depending on your other requirements.
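For example, assuming both sides have agreed on an epoch only microseconds in the past:

    using System;

    DateTime epoch = DateTime.UtcNow;              // the agreed starting point
    // ... within 255 microseconds of the epoch:
    long micros = (DateTime.UtcNow - epoch).Ticks / 10;
    byte payload = checked((byte)micros);          // throws once past 255 µs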

Robert Rossney