
I am working with a team that develops an application which works on SQL Server and Oracle.

SQL Server has the concept of an instance, which can house multiple databases. Oracle 10g requires one instance per database (RAC can allow more than one instance per database, for redundancy), so for each database we run, we have a completely separate set of processes, and thus much greater memory usage.

Because of this, we've started moving more to having one instance with separate schemas. However, we still want to keep separate customers' (or dev machines') data separate.

Most of our instances are created using locally developed scripts (for example, oradim on Windows).

What can be done to cut down on the memory usage requirements, footprint etc, of Oracle instances, so that multiple instances can run safely on one machine? Is Linux or Windows a better host? Can we make easy gains by disabling extra features (data mining, Oracle Text etc) that we don't need?

+2  A: 

IMHO, memory is the most important factor for Oracle performance.

Running multiple databases means keeping multiple sets of cached SQL and PL/SQL, multiple data caches for the system tables, and so on.

If you just need to keep the data separate, you may create different TABLESPACES for different users. You'll still have to share LOGFILES, though.
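A minimal sketch of that kind of separation (the tablespace names and file paths here are made up):

    -- One tablespace per customer keeps the data physically apart
    -- even though everything runs in a single instance.
    CREATE TABLESPACE customer_a_data
      DATAFILE '/u01/oradata/PROD/customer_a_01.dbf' SIZE 500M
      AUTOEXTEND ON NEXT 100M MAXSIZE 2G;

    CREATE TABLESPACE customer_b_data
      DATAFILE '/u01/oradata/PROD/customer_b_01.dbf' SIZE 500M
      AUTOEXTEND ON NEXT 100M MAXSIZE 2G;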

Disabling extra features like Data Mining won't help you much, as they do not consume memory when not being used.

You'll certainly need to lower some memory parameter values, but it's hard to tell which you should keep and which you need to lower without seeing your database design.

As a very imprecise rule of thumb, if you have an OLTP database, that is, small tables and a high level of concurrency, you should probably sacrifice sort_area_size and hash_area_size but keep db_block_buffers as high as possible.

If you have large tables using HASH JOINs and MERGE JOINs, you'll need sort_area_size and hash_area_size for efficient joins and ordering, but you can decrease db_block_buffers, as you won't be able to cache those tables anyway.
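One way the knobs from the last two paragraphs might be set, as a sketch only (the values are placeholders, not recommendations, and on 9i and later db_cache_size supersedes db_block_buffers):

    -- OLTP leaning: favour the buffer cache over per-session work areas.
    ALTER SYSTEM SET db_cache_size = 512M SCOPE = SPFILE;

    -- sort_area_size / hash_area_size only apply when
    -- workarea_size_policy = MANUAL; under the default AUTO policy,
    -- size pga_aggregate_target instead.
    ALTER SYSTEM SET sort_area_size = 1048576 DEFERRED;   -- bytes, per session
    ALTER SYSTEM SET hash_area_size = 2097152 DEFERRED;   -- bytes, per session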

Linux and Windows do not differ much in terms of Oracle performance. Linux, though, honors LOCK_SGA, which pins the SGA in physical memory; Windows tries to do the same, but can still swap it out under memory pressure.
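If you do want the SGA pinned, a sketch (LOCK_SGA is a static parameter, so it takes a restart; on Linux the oracle user's memlock limit must also allow it):

    -- Keep the SGA in physical memory so it cannot be swapped out.
    ALTER SYSTEM SET lock_sga = TRUE SCOPE = SPFILE;
    -- Restart the instance for the change to take effect.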

Quassnoi
Maybe I got the wrong impression from the OP, but I thought he may want to start by adjusting the less fine-grained MEMORY_TARGET rather than the various fine-grained parameters (sort_area_size, hash_area_size, ...). Adjusting MEMORY_TARGET may get him far enough.
RussellH
I can cut down the memory limits, but I had hoped I would just be able to cut down memory usage so that it wouldn't hit the limits. :) I am open to all suggestions and happy to dive in further if the simple approach doesn't work.
crb
Have you tried experimenting with these parameters? It is possible that Oracle is pre-allocating a bigger SGA than you really need.
RussellH
+1  A: 

Memory consumption will come from the following areas:

  1. per-instance SGA
  2. per-session UGA, PGA (sorting, etc.)
  3. misc. other per-process memory requirements not covered above (e.g. the stack and heap used by each oracle process, the memory used by the actual executable images, etc.)

The cost of (1) is worth it in order to keep devs and customers from thrashing each other's buffer caches. Instances that do not care about such separation should be merged.

You can assume the cost of (2) is paid in one form or another by all RDBMSs (it is solely a function of your RDBMS's configuration), so it's not a tradeoff.

I would estimate that the impact of (3) is smallest on Linux because of fork and page sharing, so you only pay for what you use. On modern *nix systems, the fork-instead-of-multithreading approach is extremely efficient from both a CPU and a memory point of view, while offering numerous other advantages (you virtually never need to fear memory leaks).


That said, do not forget that memory readings from Windows' Task Manager or from top may be misleading as to a process's actual memory consumption because of "double counting": the shared memory segment, the actual oracle executable images, and the dynamic link libraries may be counted once per process, when in fact that memory is shared (and only resident once) across all the processes involved. Under Unix, use "pmap" or "cat /proc/<pid>/maps" to see how much memory is actually used by a process and how much of it is shared.

vladr
+1  A: 

MEMORY_TARGET is the parameter you want to set if you are using Oracle 11g.

http://download.oracle.com/docs/cd/B28359_01/server.111/b28320/initparams133.htm

As you add more database instances to the same server, you'll need to adjust this parameter downwards on all the other instances so that you don't use too much memory on the server and cause it to swap. On Oracle 10g, you set PGA_AGGREGATE_TARGET and SGA_TARGET.

http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/initparams157.htm
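As a rough sketch of both variants (the sizes are placeholders; budget across all instances on the box and leave headroom for the OS):

    -- 11g: one knob covers SGA and PGA together.
    ALTER SYSTEM SET memory_target = 1G SCOPE = SPFILE;

    -- 10g: size the SGA and the aggregate PGA separately per instance.
    ALTER SYSTEM SET sga_target = 768M SCOPE = SPFILE;
    ALTER SYSTEM SET pga_aggregate_target = 256M SCOPE = SPFILE;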

Unfortunately, the more Oracle instances you add, the harder it gets to tune, and the slower the system gets.

I don't really have much information on Windows vs. Linux.

RussellH
+1  A: 

If your primary goal is separation of client data, there's no need to run multiple instances. There are two types of separation you can achieve. You're already achieving the first, logical separation, by putting each client in a separate schema. To physically separate the data, create a tablespace for each client. A tablespace is a grouping of actual database files. By using them, you control where your data is physically stored. Assign a client's tablespace as the "default tablespace" when you create a schema for a client.
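For example (all names here are hypothetical), assuming the client's tablespace already exists as described above:

    -- The client's schema; its objects land in the client's own
    -- tablespace by default.
    CREATE USER client_acme IDENTIFIED BY changeme
      DEFAULT TABLESPACE client_acme_data
      QUOTA UNLIMITED ON client_acme_data;

    GRANT CREATE SESSION, CREATE TABLE TO client_acme;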

Unless you want to do something like run different versions of Oracle on the same server, there's rarely a reason to fire up more than one instance. To get the most out of your single instance, get as much of it in memory as possible, and use tools like TOAD, or the included Oracle tools, to optimize allocation of memory to various Oracle processes.

Stephen Harmon
I was assuming there was some reason he needed separate instances (gratuitous use of public synonyms in the code, for example), but I agree, one instance per server is optimal.
RussellH
+1  A: 

If you have separate schemas, the data of separate customers is already kept separate. Why the need for more instances? One instance per machine is best for the production environment.

For the development environment you can make a different choice, but the development and production environments should be kept separate.

tuinstoel
An obvious reason I overlooked when posting: it means we can continue to use "standard" application user and role names without worrying about a password change in one application affecting others (unfortunately, we also integrate a legacy app with no server component - it uses Oracle users directly).
crb
I don't understand what you mean. What is a "standard" application user name?
tuinstoel
A default user (or schema/role) name. For example, our legacy app still requires a (hard-coded) role like DOC, so our scripts create that role in each database. Users in that role could access data in any other schema the role had been granted access to. We try to work around this with SCHEMA_DOC roles where possible.
crb
+1  A: 

Agree with the 'one instance per server' concept. If you want to give some users preferential treatment over others, or to ensure each application gets a fair slice of the database, look into resource management and profiles.

http://download.oracle.com/docs/cd/B19306_01/network.102/b14266/admusers.htm#i1012785

http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_2007.htm#sthref4352

http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/dbrm.htm#ADMIN027
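A small sketch of the profile approach (the names and limits are invented; note that kernel resource limits in profiles are only enforced while RESOURCE_LIMIT is TRUE):

    -- Turn on enforcement of profile resource limits.
    ALTER SYSTEM SET resource_limit = TRUE;

    -- Cap what one application's account can consume.
    CREATE PROFILE app_one_profile LIMIT
      SESSIONS_PER_USER       20
      CPU_PER_CALL            30000     -- hundredths of a second
      LOGICAL_READS_PER_CALL  100000;   -- database blocks

    ALTER USER app_one PROFILE app_one_profile;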

Keeping data in separate schemas (and in separate tablespaces at the physical level) should ensure appropriate separation of customer data. As long as you don't share tablespaces between schemas, it becomes (relatively) trivial to use export and transportable tablespaces to extract and separate out individual schemas, so you can move them between databases (and hence instances) if it is subsequently necessary to have them on their own server.

Gary
+1  A: 

Can you add memory to the server? You can buy 32 GB of memory for less than $1000.

Angelo Marcotullio