I'm trying to convert the permanent tables used in a stored procedure to
global temp tables. I've looked at the stats on these permanent tables;
some have tens of millions of rows and are on the order of gigabytes
in size (up to 10 GB).

So,

CREATE TABLE my_table (
column1 NUMBER,
column2 NUMBER,
etc...
)
TABLESPACE BIGTABLESPACE
NOLOGGING
NOCOMPRESS
NOCACHE
NOPARALLEL
MONITORING;

should become

CREATE GLOBAL TEMPORARY TABLE my_table (
column1 NUMBER,
column2 NUMBER,
etc...
)
ON COMMIT PRESERVE ROWS;

For each existing permanent table, I'm creating an equivalent global temporary
table whose rows are preserved until the end of the session.
This global temp table will be used in the procedure instead of the permanent table:
an EXECUTE IMMEDIATE 'TRUNCATE TABLE ...' at the start, and an INSERT /*+ APPEND */ INTO at some later point, as sketched below.
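
A rough sketch of that pattern (hypothetical example; my_table and my_source are placeholder names):

CREATE OR REPLACE PROCEDURE load_my_table AS
BEGIN
  -- TRUNCATE is DDL, so it must go through dynamic SQL inside PL/SQL;
  -- truncating a GTT only removes the current session's rows
  EXECUTE IMMEDIATE 'TRUNCATE TABLE my_table';

  -- direct-path insert into the global temporary table
  INSERT /*+ APPEND */ INTO my_table (column1, column2)
  SELECT column1, column2
  FROM my_source;

  -- a commit is needed before this session can query
  -- the direct-path-loaded rows
  COMMIT;
END load_my_table;
/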

All of the permanent tables were created in a single bigfile tablespace,
BIGTABLESPACE.

The Oracle docs state that a global temporary table is created in
the user's temporary tablespace (I assume this is TEMP).
The problem with this is that the TEMP tablespace is small and its
extents are not set to grow to the size I need during the procedure.

The TEMP tablespace was created during database creation:

create database "$oracle_sid"
user sys identified by "$sys_password"
user system identified by "$system_password"
set default bigfile tablespace
controlfile reuse
maxdatafiles 256
maxinstances $maxinstances
maxlogfiles 16
maxlogmembers 3
maxloghistory 1600
noarchivelog
character set WE8MSWIN1252
national character set AL16UTF16
datafile
'$oracle_home/oradata/$oracle_sid/system01.dbf' size 512M
logfile
'$oracle_home/oradata/$oracle_sid/redo01.log' size 1G,
'$oracle_home/oradata/$oracle_sid/redo02.log' size 1G,
'$oracle_home/oradata/$oracle_sid/redo03.log' size 1G
sysaux datafile
'$oracle_home/oradata/$oracle_sid/sysaux01.dbf' size 256M
default temporary tablespace temp tempfile
'$oracle_home/oradata/$oracle_sid/temp01.dbf' size 5G
undo tablespace "UNDOTBS1" datafile
'$oracle_home/oradata/$oracle_sid/undotbs01.dbf' size 5G;

The permanent tables (that I'm planning to replace) were originally
created in tablespace BIGTABLESPACE:

-- 50G bigfile datafile size
create bigfile tablespace "BIGTABLESPACE"
datafile '$oracle_home/oradata/$oracle_sid/bts01.dbf' size 50G
extent management local
segment space management auto;

The permanent table indexes were originally created in tablespace
BIGINDXSPACE:

-- 20G bigfile datafile size
create bigfile tablespace "BIGINDXSPACE"
datafile '$oracle_home/oradata/$oracle_sid/btsindx01.dbf' size 20G
extent management local
segment space management auto;

Is replacing these permanent tables with global temporary tables feasible?
As things stand, the procedure would run into problems extending the TEMP tablespace. Is there a way to create global temporary tables and their indexes in the tablespaces BIGTABLESPACE and BIGINDXSPACE? If not, how can I make the TEMP tablespace behave like a bigfile tablespace and still achieve index/table separation? Can I create two TEMP bigfile tablespaces and put indexes in one and tables in the other?

I want to use global temporary tables, but the volume of data I'm handling in this procedure seems to be beyond their intended design. Any suggestions?

+3  A: 

There is no benefit to separating data and indexes into separate tablespaces other than potentially making DBAs more comfortable that similar objects are grouped together. There is a long-standing myth that separating indexes and data is beneficial for performance; that is not correct.

Temporary objects must be stored in a temporary tablespace. You could increase the size of your TEMP tablespace, or create a separate temporary tablespace just for the user(s) that will own these objects if you want to segregate these large temporary tables. You can't (and wouldn't want to) store them in your permanent tablespaces.
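
For example, a dedicated temporary tablespace could look like this (a sketch; the file name, sizes, and user name are illustrative):

-- illustrative only: a large, auto-extending bigfile temporary tablespace
create bigfile temporary tablespace big_temp
tempfile '$oracle_home/oradata/$oracle_sid/big_temp01.dbf' size 10G
autoextend on next 1G maxsize 50G;

-- assign it to the user that will own the global temporary tables
alter user app_user temporary tablespace big_temp;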

Architecturally, though, I would be very curious about why temporary tables are necessary in your system. If you have sessions that are writing tens of GB into temporary tables, then presumably reading those tens of GB back out in order to write the data somewhere else, I would tend to suspect that there are more efficient solutions. It is very rare in Oracle to even need temporary tables; the need is far more common in other databases, where readers can block writers, so data has to be copied out of tables before being worked on. Oracle has no such limitation.

Justin Cave
Here's why temporary tables would be necessary: the Oracle database I'm working on has one designated user with all privileges and all grants. A client-side canned-SQL application is used to log in as this user and start a session, during which any stored procedure may be called. These stored procedures use permanent tables. I do not want one session's procedure run to overlap with another's and double-insert data into the permanent tables used in the stored procedure.
Dragos Toader
But why are the stored procedures writing any transient data to tables in the first place, particularly tens of GB of it? That's the component of the design that is most confusing. If the data isn't being written permanently, then presumably you could just write a query against the underlying tables to get whatever data your procedures need.
Justin Cave
The transient data is written into tables because of complexity. Data is collected from multiple permanent source tables, then gets processed (joined, filtered, calculated) and eventually ends up getting appended to multiple results tables (which are permanent). By joined I mean outer, inner, or cartesian as needed. By filtered I mean WHERE clause criteria. By calculated I mean SELECT calculations/function calls. It's a complex multi-step process that is best handled in declarative SQL; SQL lends itself to simplicity. Plus there's data left over to debug with.
Dragos Toader
A: 

I looked at large global temporary tables for a migration exercise. They worked, but for debugging and rejection handling I eventually went with plain tables.

If the GTTs don't work out, consider either Row-Level Security / VPD (or even views). You can have a column derived from sys_context('USERENV','SESSIONID') and use that to ensure that the user can only see their own data.
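
A minimal sketch of the view-based variant (work_table and its session_id column are hypothetical; the column would be populated on insert):

-- the view only exposes rows stamped with the current session's id
create or replace view my_session_data as
select *
from work_table
where session_id = sys_context('USERENV', 'SESSIONID');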

Still the thought of multiple sessions dealing with multi-gigabyte datasets concurrently is a bit scary.

PS. I believe that GTTs used through a procedure use the temp tablespace of the session user rather than the temp tablespace of the procedure owner. If you can have the sessions connect as separate Oracle users, then you have a chance at spreading your file I/O over different tablespaces.

Gary
+1  A: 

I don't think there's anything in your description that makes GTTs unattractive. You obviously need very large temporary tablespaces, but you're not consuming more space overall unless you've been making heavy use of table compression (unavailable in GTTs at least up to 10gR2). Look into the use of tablespace groups: http://download.oracle.com/docs/cd/B19306%5F01/server.102/b14231/tspaces.htm#ADMIN01103

Using a tablespace group, rather than a single temporary tablespace, can alleviate problems caused where one tablespace is inadequate to hold the results of a sort, particularly on a table that has many partitions. A tablespace group enables parallel execution servers in a single parallel operation to use multiple temporary tablespaces.
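
A sketch of how a tablespace group might be set up (names, file paths, and sizes are illustrative):

-- two temporary tablespaces placed into one group
create temporary tablespace temp1
tempfile '$oracle_home/oradata/$oracle_sid/temp1_01.dbf' size 10G
tablespace group temp_grp;

create temporary tablespace temp2
tempfile '$oracle_home/oradata/$oracle_sid/temp2_01.dbf' size 10G
tablespace group temp_grp;

-- the group can then be assigned as a user's temporary tablespace
alter user app_user temporary tablespace temp_grp;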

Also, don't neglect the use of subquery factoring clauses (WITH clauses). They can often replace the use of temporary tables. However, they might still require just as much temporary storage space, because a large result set from a SQFC can spill to disk to avoid consuming too much memory, so you may still have to go ahead with the increase in TEMP space. They're very handy for not having to deploy a new database object every time you need a new temporary table.
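
For instance, a subquery factoring clause standing in for a one-off staging table might look like this (table and column names are invented for illustration):

-- filtered_source plays the role of a staging table; the MATERIALIZE
-- hint (undocumented but widely used) asks the optimizer to materialize
-- the result set rather than merge it into the outer query
with filtered_source as (
  select /*+ MATERIALIZE */ s.id, s.amount
  from source_table s
  where s.status = 'ACTIVE'
)
select f.id, sum(f.amount)
from filtered_source f
join other_table o on o.id = f.id
group by f.id;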

David Aldridge