G'day,
I'll ask the obvious here: did you check the date and time on both machines?
Edit: ... and the MySQL timezone was the same on both machines?
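A quick way to check what MySQL itself thinks (just a sanity-check sketch, run it on both machines and compare):

    SELECT @@global.time_zone, @@session.time_zone;  -- server-wide and session time zone settings
    SELECT NOW(), UTC_TIMESTAMP();                   -- local wall-clock time vs UTC as MySQL sees them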
Update: OK. The problem is that the timestamp string passed into UNIX_TIMESTAMP() is interpreted as a value in the current (session) time zone and then converted back to UTC. Because you're in MEZ, two hours are subtracted to bring it back to UTC, so 7200 seconds are taken off your timestamp when it is converted to a Unix timestamp.
Hence the variation you see when using UNIX_TIMESTAMP() to convert it back to a Unix epoch timestamp.
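You can see the effect straight from the client (a rough sketch; the exact numbers depend on your clock and session time zone):

    SELECT UNIX_TIMESTAMP(NOW());            -- NOW() really is local time, so this gives the correct epoch value
    SELECT UNIX_TIMESTAMP(UTC_TIMESTAMP());  -- the UTC string is treated as local time, so this comes out 7200 seconds low at UTC+2

    SET time_zone = '+00:00';                -- make the session UTC for the conversion
    SELECT UNIX_TIMESTAMP(UTC_TIMESTAMP());  -- now the two agree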
BTW, shouldn't you be using a TIMESTAMP type for storing your UTC_TIMESTAMP values instead of a DATETIME type?
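Something along these lines (table and column names are just made up for illustration); TIMESTAMP is stored internally as UTC and converted to and from the session time zone automatically:

    CREATE TABLE events (
        id         INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP  -- stored as UTC, shown in the session time zone
    );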
Update: Decoupling presentation time from stored time is definitely the way to go. You can then reuse the same data all around the world and only have to convert to and from local time when you are presenting the data to a user.
If you don't do this then you have to store the time zone that was in effect when the timestamp was made and then work through all sorts of complicated permutations of
- whether the local time zone was in daylight saving time when the value was stored,
- what the difference is between the time zone in effect when the data was stored and the time zone where the data is to be presented.
Leaving it all stored as UTC gets rid of all that.
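If you do stick with a DATETIME column, the write side is then just (column name hypothetical):

    INSERT INTO events (created_at_utc) VALUES (UTC_TIMESTAMP());  -- always store UTC, no per-row time zone needed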
Most users won't be too happy if they have to work out the local time themselves from the UTC value returned, so systems usually convert to the user's current local time for display.
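The conversion on the way out can be as simple as the following (assuming the MySQL time zone tables are loaded so named zones work; otherwise use a numeric offset like '+02:00'):

    SELECT CONVERT_TZ(created_at_utc, '+00:00', 'Europe/Berlin') AS local_time
    FROM events;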
That is, of course, assuming the user wants the data expressed in local time, which is usually the case. The only widely used systems I can think of, off the top of my head, that store and present their data in UTC are the ones for air traffic control and flight plan management, which are always kept in UTC (or Zulu time, to be more precise).
HTH
cheers,