views: 335

answers: 5
Is there any reason behind using the date January 1st, 1970 as the standard for time manipulation? I have seen this standard in Java as well as in Python, the two languages I am familiar with. Do other popular languages also follow the same standard?

Please describe.

+14  A: 

It is the standard of Unix time.

Unix time, or POSIX time, is a system for describing points in time, defined as the number of seconds elapsed since midnight proleptic Coordinated Universal Time (UTC) of January 1, 1970, not counting leap seconds.
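To illustrate (a minimal Python sketch, since Python is one of the languages mentioned in the question), timestamp 0 corresponds exactly to that moment:

```python
from datetime import datetime, timezone

# The Unix epoch: midnight UTC, January 1, 1970.
epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)

# Its Unix timestamp is zero, and timestamp 0 maps straight back to it.
print(epoch.timestamp())                           # 0.0
print(datetime.fromtimestamp(0, tz=timezone.utc))  # 1970-01-01 00:00:00+00:00
```

Every other timestamp is just a count of (non-leap) seconds before or after that instant.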

Soldier.moth
Do you know if Kernighan and Thompson ever expressed a reason for choosing that moment beyond "It's a round number slightly before we started building the thing"?
dmckee
It's the start of a year, it's in the zero timezone (Zulu). Both of those make the date formatting code simpler.
Donal Fellows
Doesn't count leap seconds? I did not know that detail. After thinking about it for a few moments I can see why you'd do it that way, but man... my world is shattered. By 24 seconds.
keturn
+1  A: 

Yes, C (and its family of languages) uses the same epoch. This is where Java took it from, too.

Péter Török
Thanks for pointing that out. But why?
Vijay Shanker
+4  A: 

January 1st, 1970 is the zero-point of POSIX time.

honk
A: 

It's actually a Unix timestamp. For more info see www.unixtimestamp.com or the Wikipedia article on Unix time.

P.S. I think the more interesting question is what will happen to all the legacy systems in 2038, when the Unix timestamp overflows its 32-bit format :D
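For the curious, the exact rollover moment is easy to compute (a quick Python sketch):

```python
from datetime import datetime, timezone, timedelta

# A signed 32-bit time_t can hold at most 2**31 - 1 seconds past the epoch.
epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
rollover = epoch + timedelta(seconds=2**31 - 1)
print(rollover)  # 2038-01-19 03:14:07+00:00
```

One second after that, a signed 32-bit counter wraps around to a large negative number, i.e. a date in 1901.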

Mike
You're still using a 32-bit `time_t`?
Donal Fellows
Nope, but there are millions of legacy systems that do
Mike
The same thing will happen as happened with Y2K: news ablaze, mass hysteria, and then nothing. In any case, why bother? We will never get there. The Maya solved the issue preemptively... :D
Stefano Borini
A: 

Is there any reason behind using date(January 1st, 1970) as standard for time manipulation?

No reason that matters.

Python's time module is a wrapper around the C library. Ask Ken Thompson why he chose that date as an epochal date. Maybe it was someone's birthday.

Excel uses two different epochs. Is there any reason why different versions of Excel use different dates?

Except for the actual programmers, no one else will ever know why those kinds of decisions were made.

And...

It does not matter why the date was chosen. It just was.

Astronomers use their own epochal date: http://en.wikipedia.org/wiki/Epoch_(astronomy)

Why? A date has to be chosen to make the math work out. Any random date will work.

A date far in the past avoids negative numbers for the general case.

Some of the smarter packages use the proleptic Gregorian year 1. Any reason why year 1? There's a reason given in books like Calendrical Calculations: it makes the math slightly simpler.

But if you think about it, the difference between 1/1/1 and 1/1/1970 is just 1969 years, a trivial mathematical offset.
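You can see the fixed offset directly in Python, whose `date.toordinal()` counts days from the proleptic Gregorian 1/1/1 (a small sketch):

```python
from datetime import date

# date.toordinal() counts days since proleptic Gregorian 0001-01-01,
# which has ordinal 1 -- the same year-1 epoch mentioned above.
year_one = date(1, 1, 1).toordinal()        # 1
unix_epoch = date(1970, 1, 1).toordinal()   # 719163

# Converting between the two epochs is just subtracting a constant.
offset_days = unix_epoch - year_one
print(offset_days)  # 719162 days between the two epochs
```

Whichever zero point you pick, converting between calendars is a constant addition or subtraction.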

S.Lott
If 1/1/1 had been chosen, we would have run out of seconds (2^31) by now. As it stands, we face a Y2K-like issue in 2038 on 32-bit operating systems. http://en.wikipedia.org/wiki/Year_2038_problem
Chris Nava
@Chris Nava: The folks that use 1/1/1 count days, not seconds. 2 billion days is about 5 million years. Often they keep a (day,time) pair to maximize time resolution; there are only 86400 seconds in most days.
S.Lott
@S.Lott: Yes. I was just pointing out that since most software counts seconds (not days) since the epoch, 1/1/1 was too far in the past to be a reasonable start date. Therefore a more recent date was chosen as the computer epoch (and, by association, the start of the IT revolution ;-)
Chris Nava
@Chris Nava: "most"? I assume by "most" you mean "Linux". Other OS's don't work the same way Linux does. The issue is that "reasonable" and "why 1/1/1970?" aren't easy questions to answer; most importantly, the answer doesn't matter. "reasonable" is true, but it's not the reason **why**. The reason **why** is something only Ken Thompson can answer.
S.Lott