Many web applications with a 3-tier architecture do all of their processing in the app server and use the database only for persistence, just to keep database independence. After paying a huge amount for a database, doing all the processing (including batch) in the app server and not using the power of the database seems like a waste. I have difficulty convincing people that we need to use the best of both worlds.
It depends on your application. You should set things up so your database does things databases are good for. An eight-table join across tens of millions of records is not something you're going to want to handle in your application tier. Nor is performing aggregate operations on millions of rows to emit little pieces of summary information.
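For example, here's a minimal JDBC sketch of that point: the aggregation happens in the database and only the summary rows cross the wire. The table and column names (orders, customer_id, total) and the connection details are invented for illustration.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class OrderSummary {
    public static void main(String[] args) throws SQLException {
        // Hypothetical connection string and credentials.
        String url = "jdbc:postgresql://localhost/shop";
        try (Connection conn = DriverManager.getConnection(url, "app", "secret");
             // One aggregate query: the database scans, groups and sums millions
             // of rows; only one summary row per customer reaches the app tier.
             PreparedStatement ps = conn.prepareStatement(
                 "SELECT customer_id, SUM(total) AS revenue "
                 + "FROM orders GROUP BY customer_id");
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                System.out.printf("%d -> %s%n",
                    rs.getLong("customer_id"), rs.getBigDecimal("revenue"));
            }
        }
    }
}
```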
On the other hand, if you're just doing a lot of CRUD, you're not losing much by treating that large expensive database as a dumb repository. But simple data models that lend themselves to application-focused "processing" sometimes end up leading you down the road to creeping unforeseen inefficiencies. Design knots. You find yourself processing recordsets in the application tier. Looking things up in ways that begin to approximate SQL joins. Eventually you painfully refactor these things back to the database tier where they run orders of magnitude more efficiently...
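To make that refactoring concrete, here's a hedged sketch of the anti-pattern and the fix. The schema (orders, customers) is hypothetical; the point is only the shape of the code.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class JoinRefactor {
    // The trap: fetch every order, then look each customer up one at a time.
    // The app tier is re-implementing a join, one round trip per row.
    static void reportInAppTier(Connection conn) throws SQLException {
        try (Statement st = conn.createStatement();
             ResultSet orders = st.executeQuery("SELECT id, customer_id FROM orders")) {
            while (orders.next()) {
                try (PreparedStatement ps = conn.prepareStatement(
                         "SELECT name FROM customers WHERE id = ?")) {
                    ps.setLong(1, orders.getLong("customer_id"));
                    try (ResultSet c = ps.executeQuery()) {
                        if (c.next()) {
                            System.out.println(orders.getLong("id") + " " + c.getString("name"));
                        }
                    }
                }
            }
        }
    }

    // The refactor: one join, run where the indexes and the optimiser live.
    static void reportInDatabase(Connection conn) throws SQLException {
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(
                 "SELECT o.id, c.name FROM orders o JOIN customers c ON c.id = o.customer_id")) {
            while (rs.next()) {
                System.out.println(rs.getLong("id") + " " + rs.getString("name"));
            }
        }
    }
}
```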
So, it depends.
What "power" of the database are you not using in a 3-tier archiecture? Presumably we exploit SQL to the full, and all the data management, paging, caching, indexing, query optimisation and locking capabilities.
I'd guess that the argument is where what we might call "business logic" should be implemented. In the app server or in database stored procedure.
I see two reasons for putting it in the app server:
1) Scalability. It's comparatively hard to add more database engines if the DB gets too busy, and partitioning data across multiple databases is really tricky. So instead pull the business logic out to the app server tier; now we can have many app server instances all doing business logic.
2) Maintainability. In principle, stored procedure code can be well-written, modularised and reusable. In practice it seems much easier to write maintainable code in an OO language such as C# or Java. For some reason re-use in stored procedures seems to happen by cut and paste, and so over time the business logic becomes hard to maintain. I would concede that with discipline this need not happen, but discipline seems to be in short supply right now.
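To illustrate the maintainability point, a tiny sketch of a business rule kept in one place as a plain Java class, unit-testable and shared by every caller (web, batch, API). The rule itself (discount eligibility) and its names are invented for the example.

```java
import java.math.BigDecimal;

/** A hypothetical business rule written once and reused,
 *  rather than cut-and-pasted between stored procedures. */
public final class DiscountPolicy {
    private final BigDecimal threshold;

    public DiscountPolicy(BigDecimal threshold) {
        this.threshold = threshold;
    }

    /** One place to change the rule; every entry point shares it. */
    public boolean qualifiesForDiscount(BigDecimal orderTotal, boolean loyaltyMember) {
        return loyaltyMember && orderTotal.compareTo(threshold) >= 0;
    }
}
```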
We do need to be careful to truly exploit the database query capabilities to the full, for example avoiding pulling large amounts of data across to the app server tier.
I've seen one application designed (by a pretty smart guy) with tables of the form:
id | one or two other indexed columns | big_chunk_of_serialised_data
Access to that in the application is easy: there are methods that will load one (or a set) of objects, deserialising them as necessary, and methods that will serialise an object back into the database.
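Something like this minimal sketch, assuming Java serialisation and a hypothetical documents table; the blob column name is taken from the shape above.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class BlobStore {
    // Load one object by id, deserialising the blob column.
    public static Object load(Connection conn, long id)
            throws SQLException, IOException, ClassNotFoundException {
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT big_chunk_of_serialised_data FROM documents WHERE id = ?")) {
            ps.setLong(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                if (!rs.next()) return null;
                try (ObjectInputStream in = new ObjectInputStream(
                        new ByteArrayInputStream(rs.getBytes(1)))) {
                    return in.readObject();
                }
            }
        }
    }

    // Serialise an object and write it back. Every field inside the blob is
    // invisible to SQL, which is exactly the limitation described next.
    public static void save(Connection conn, long id, Serializable value)
            throws SQLException, IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(value);
        }
        try (PreparedStatement ps = conn.prepareStatement(
                "UPDATE documents SET big_chunk_of_serialised_data = ? WHERE id = ?")) {
            ps.setBytes(1, bytes.toByteArray());
            ps.setLong(2, id);
            ps.executeUpdate();
        }
    }
}
```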
But as expected (though only in hindsight, sadly), there are so many cases where we want to query the DB in some way outside that application! This is worked around in various ways: an ad-hoc query interface in the app (which adds several layers of indirection to getting at the data); reuse of some parts of the app code; hand-written deserialisation code (sometimes in other languages); and simply having to do without any fields that live inside the serialised chunk.
I can readily imagine the same thing occurring for almost any app: it's just handy to be able to access your data. Consequently I think I'd be pretty averse to storing serialised data in a real DB -- with possible exceptions where the saving outweighs the increase in complexity (an example being storing an array of 32-bit ints).
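For the 32-bit int case mentioned, a small sketch of what that exception might look like: pack the array into a fixed-layout byte[] for a blob column, accepting that those values become invisible to SQL.

```java
import java.nio.ByteBuffer;

public class IntArrayCodec {
    // Pack an int[] into 4 bytes per value (big-endian, ByteBuffer's default).
    public static byte[] pack(int[] values) {
        ByteBuffer buf = ByteBuffer.allocate(values.length * Integer.BYTES);
        for (int v : values) {
            buf.putInt(v);
        }
        return buf.array();
    }

    // Reverse the layout above.
    public static int[] unpack(byte[] bytes) {
        ByteBuffer buf = ByteBuffer.wrap(bytes);
        int[] values = new int[bytes.length / Integer.BYTES];
        for (int i = 0; i < values.length; i++) {
            values[i] = buf.getInt();
        }
        return values;
    }
}
```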
No. Databases should be used for business rules enforcement as well.
Alas, the big DBMS vendors are either not competent enough or not willing to support this, which makes that ideal impossible and keeps their customers hostage to their major cash cows.
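As far as today's products go, the nearest thing is declarative constraints; a hedged sketch (issued over JDBC, with an invented accounts schema) of what enforcing a rule in the database itself can look like:

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class SchemaRules {
    public static void apply(Connection conn) throws SQLException {
        try (Statement st = conn.createStatement()) {
            // The database rejects violations no matter which application,
            // batch job or ad-hoc tool tries to write the row.
            st.executeUpdate(
                "CREATE TABLE accounts ("
                + "  id       BIGINT PRIMARY KEY,"
                + "  owner_id BIGINT NOT NULL REFERENCES customers(id),"
                + "  balance  NUMERIC(12,2) NOT NULL CHECK (balance >= 0)"
                + ")");
        }
    }
}
```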