Oracle allows you to specify the precision of the TIMESTAMP type in a table: the number of digits in the fractional part of the SECOND datetime field. Are there any disadvantages to specifying the maximum precision, TIMESTAMP(9)?

One reason I can think of is that this information may be used for prettier output by Oracle tools.

The maximum of 9 digits suggests that the fractional part is stored as a 4-byte integer, so it should not have any performance implications; please correct me if I'm wrong here.
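
To illustrate what the precision actually controls, here is a minimal sketch (assuming an Oracle session; LOCALTIMESTAMP accepts a precision argument, and the system clock may supply fewer than 9 meaningful digits):

--What the precision controls: digits kept in the fractional seconds.
select localtimestamp(1) as ts1,  --1 fractional digit
       localtimestamp(9) as ts9   --up to 9 fractional digits
from dual;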

A: 

The problem is performance. You must trade it against precision. Smaller numbers are read and written in fewer CPU instructions. A CPU instruction takes less than a nanosecond, but if your server serves millions of transactions you might find some performance decrease, and this suggests adopting less precision, or even no precision at all (rounding all timestamps to whole seconds is quite acceptable in most scenarios, even in banking).
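
For example, rounding to whole seconds is just a matter of the declared or cast precision; a minimal sketch (casting to a lower-precision TIMESTAMP rounds the fractional seconds):

--Round a high-precision timestamp to whole seconds.
select localtimestamp(9)                        as full_precision,
       cast(localtimestamp(9) as timestamp(0))  as rounded_to_second
from dual;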

But if for some reason, e.g. real-time system logging, you need more precision, you are forced to use a higher precision and thus accept a performance decrease. If your server doesn't process a great number of transactions per second you'll see almost no performance impact, but if you don't need the precision you're wasting memory.

Hope this helps. If you share your DB requirements with us, we can help you choose the best compromise.

djechelon
How exactly does TIMESTAMP(1) differ from TIMESTAMP(9) from a performance perspective? Aren't fractions of a second stored as an integer anyway, so that shouldn't affect the CPU at all, in searches for example? Are you sure that TIMESTAMP(1) occupies less memory internally than TIMESTAMP(9)?
Leonid
You just hit the point! SQL is declarative, so you have no guarantee that TIMESTAMP(1) is smaller than TIMESTAMP(9). For example, Oracle on a 64-bit platform may decide to store all timestamps, no matter your choice, as 64-bit integers without telling you. But if you specify TIMESTAMP(9), Oracle guarantees that it won't ever lose precision. About performance, I repeat, you _might_ notice a performance loss only under **very heavy** loads, which may or may not be likely depending on your application
djechelon
My question is where **the performance loss** comes from in the case of TIMESTAMP if Oracle uses integers to store fractions of a second. TIMESTAMP here is an Oracle type and has nothing to do with SQL.
Leonid
Simply, if it's stored as 64 bits on a 32-bit machine, it takes twice as long to process
djechelon
Yes, but TIMESTAMP(1) and TIMESTAMP(9) would not be any different on a fixed platform. In one case it's 1 digit after the decimal point, in the other 9; in both cases we need a platform-dependent-size integer.
Leonid
Why do you assume that Oracle stores TIMESTAMPs as integers? Oracle's internal representations of DATEs and TIMESTAMPs aren't integers: DATEs are 7-8 bytes, and I've seen TIMESTAMPs up to 22 bytes.
Adam Musch
That seems the most logical assumption to me, given that the maximum allowed precision is 9 digits and the maximum unsigned 4-byte integer is around that size. Can you see any reason not to store it as an integer?
Leonid
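
For what it's worth, the internal representation can be inspected directly with DUMP and VSIZE; a sketch (byte counts may vary by version and platform):

--DUMP shows the raw internal bytes, VSIZE the stored length.
--Commonly a timestamp with no fractional seconds occupies 7 bytes and one
--with fractional seconds 11 bytes (7 date bytes plus 4 bytes of fraction),
--regardless of the declared precision.
select vsize(localtimestamp(9)) as bytes_ts9,
       dump(localtimestamp(9))  as raw_ts9
from dual;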
+3  A: 

There are no disadvantages; use timestamp(9) if it makes sense.

Timestamp(9) and timestamp(1) use the same amount of space, and their performance is identical. I could only find one case where there was a performance difference, and in that case timestamp(9) was actually faster than timestamp(1).

(I'll spare you the many lines of boring code inserting into timestamp(1) and timestamp(9) columns and comparing different operations on them.)

This demonstrates that they use the same amount of space (inserting many values and comparing dba_segments):

--Create tables with timestamps and populate them with the same data (with different precision)
--Set initial and next to a low value so we can closely check the segment size
create table timestamp1 (t1 timestamp(1), t2 timestamp(1), t3 timestamp(1), t4 timestamp(1), t5 timestamp(1))
storage(initial 65536 next 65536);

insert into timestamp1
select current_timestamp(1), current_timestamp(1), current_timestamp(1), current_timestamp(1), current_timestamp(1)
from dual connect by level <= 100000;

create table timestamp9 (t1 timestamp(9), t2 timestamp(9), t3 timestamp(9), t4 timestamp(9), t5 timestamp(9))
storage(initial 65536 next 65536);

insert into timestamp9
select current_timestamp(9), current_timestamp(9), current_timestamp(9), current_timestamp(9), current_timestamp(9)
from dual connect by level <= 100000;


--Segment size is identical
select segment_name, bytes from dba_segments where segment_name in ('TIMESTAMP1', 'TIMESTAMP9');

--SEGMENT_NAME   BYTES
--TIMESTAMP1     8388608
--TIMESTAMP9     8388608

This is the case where timestamp(9) is faster: calling current_timestamp, which you'll probably need at some point to generate the data. But we're only talking about the difference between roughly 0.175 and 0.25 seconds on my slow desktop to generate 100K timestamps. I'm not sure why timestamp(9) is faster; maybe timestamps are always generated as timestamp(9) and then rounded to other precisions?

--current_timestamp(9) is slightly faster than current_timestamp(1)
select count(*) from
(
  select *
  from dual
  --Swap which line is commented to time each precision:
  --where current_timestamp(9) = current_timestamp(9)
  where current_timestamp(1) = current_timestamp(1)
  connect by level <= 100000
);

EDIT: The performance difference exists in 10g but not 11g.

jonearles
A: 

The difference is not in the technical use of the TIMESTAMP data type but in the application. FERC and NERC often require a certain precision when timestamps are used in applications labeled critical infrastructure, and as such those applications will use the highest precision made available.

Of course, making the suits happy with their sequence-of-events records often requires doing more than what is laid out by CIP-002 through CIP-009.

entens