views:

107

answers:

7

In a CakePHP application, for unique constraints that are accounted for in the database, what is the benefit of having the same validation checks in the model?

I understand the benefit of having JS validation, but I believe this model validation makes an extra trip to the DB. I am 100% sure that certain validations are made in the DB so the model validation would simply be redundant.

The only benefit I see is the app recognizing the mistake and adjusting the view for the user accordingly (repopulating the fields and showing an error message on the appropriate field, improving the UX). But this could also be achieved if there were a constraint naming convention, so the app could work out what the problem with the save was (is there an existing method to do this?).
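For reference, the model-level check under discussion is CakePHP's built-in `isUnique` validation rule; the field and message below are illustrative, not from the original question:

```php
<?php
// app/models/group.php -- illustrative CakePHP 1.x model. The `isUnique`
// rule makes CakePHP issue a SELECT COUNT(*) before the INSERT, and on
// failure attaches a per-field message the view can render next to the
// input (the repopulate-and-show-errors behaviour described above).
class Group extends AppModel {
    var $name = 'Group';

    var $validate = array(
        'name' => array(
            'unique' => array(
                'rule'    => 'isUnique',
                'message' => 'A group with this name already exists.'
            )
        )
    );
}
?>
```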

+2  A: 

Quicker response times, less database load. The further out to the client you can do validation, i.e. JavaScript, the quicker it is. The major con is having to implement the same rules in multiple layers.

k_b
OMG Ponies
Validating a record in the model doesn't need a trip to the database. For JavaScript validation, I agree.
k_b
@k_b - Wouldn't a unique constraint need to access the database? I agree for other types of validation.
BWelfel
@BWelfel: Yes, it would. Typical data integrity matters should be left to the database. The models should do validation regarding i.e. business and formatting.
k_b
What about the whole ux side of things? Is there a way to get the benefits of what cake does w/ validation in models while only needing the constraints of the db?
BWelfel
+1  A: 

Just about any benefit you might gain would probably be canceled out by the hassle of maintaining the constraints in duplicate. Unless you happen to have an easy mechanism for specifying them in a single location and keeping them in sync, I would recommend sticking with one location or the other.

Bradley Harris
A: 

If the database constraints are coded by one person and the rest of the code is written by another, they really shouldn't trust each other completely. Check things at boundaries, especially if they represent organizational boundaries between people: e.g. user to application, one developer's module to another, or one corporate department to another.

MatthewMartin
A: 

Ideally the design of the model should come first (based on user stories, use cases, etc.), with the database schema deriving from the model. Then the database implementation can either be generated from the model (explicitly tying both to a single source), or the database constraints can be designed based on relational integrity requirements (which are conceptually different from, and generally have a different granularity and vocabulary than, the model, although in many cases there is a mapping of some kind).

I generally have in mind only relational integrity requirements for database constraints. In too many cases, business constraints are more incongruent, transitory, and finer-grained than the database designer can know about; and they change more frequently over time and across application modules.

le dorfier
A: 

@le dorfier:

Matters of data integrity and matters of business rules are one and the same thing (modulo the kind of necessarily procedural "business rules" stuff, such as "when a new customer is entered, a mail with such-and-so content must be sent to that customer's email address").

The relational algebra is generally accepted to be "expressively complete" (I've even been told there is a formal proof that RA plus TC, transitive closure, is Turing complete). Therefore, RA (plus TC) can express "everything that is wrong" (wrong in the sense that it violates any arbitrary "business rule").

So enforcing a rule that "the set of things that are wrong must remain empty" boils down to the dbms enforcing each possible conceivable (data-related, i.e. state-related) business rule for you, without any programmer having to write the first byte of code to achieve such business rule enforcement.

You bring up "business rules change frequently" as an argument. If business rules change, which scenario allows the quickest adaptation of the system: changing only the corresponding RA expression that enforces the constraint/business rule, or having to hunt down every place in the application code where the rule is enforced and change all of that?

@Bradley Harris:

Correct. I'd vote you up if voting were available to me. But you forget to mention that, since one can never really be certain that some database will never be needed by some other app, the only sensible place to do business-rule enforcement is inside the DBMS.

Erwin Smout
I prefer not to be quite so absolute about where the business rules belong. RDBMS tools and constructs certainly do a great job of enforcing rules no matter who accesses or changes the data. But often it is just as sensible to trust these to an application framework such as Rails, in favor of faster development in a single language. I don't believe one solution ever really fits all.
Bradley Harris
+1  A: 

Validation in CakePHP happens before the save or update query is sent to the database, so it reduces database load. You are wrong in your belief that model validation makes an extra trip to the database: by default, validation occurs before the save.

bancer
A unique constraint check does go to the database.
BWelfel
I see what you meant now. This is the query for the table `groups` that checks the unique field: SELECT COUNT(*) AS `count` FROM `groups` AS `Group` WHERE `Group`.`name` = 'new group'. I think the benefit is that this query does not return an error. If you do not check the unique field in the model, you get the query INSERT INTO `groups` (`name`, `parent_id`) VALUES ('new group', NULL) with SQL Error: 1062: Duplicate entry 'new group' for key 'name'. That error message would probably not be in English if I had another locale.
bancer
Maybe it is also useful for the saveAll() method on databases that have no transaction support.
bancer
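On that last point: CakePHP's saveAll() can be told to validate every record up front before any insert is attempted, which approximates atomicity on storage engines without transactions. A minimal sketch (the data and flash messages are illustrative):

```php
<?php
// Illustrative CakePHP 1.x controller snippet. With 'validate' => 'first',
// saveAll() runs validation (including any isUnique rules) on all records
// before issuing a single INSERT, and aborts the whole batch if any record
// fails -- useful when the storage engine (e.g. MyISAM) cannot roll back.
$groups = array(
    array('Group' => array('name' => 'admins')),
    array('Group' => array('name' => 'editors')),
);
if ($this->Group->saveAll($groups, array('validate' => 'first'))) {
    $this->Session->setFlash('All groups saved.');
} else {
    $this->Session->setFlash('Nothing was saved; please fix the errors.');
}
?>
```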
A: 

Don't forget the matter of portability. Enforcing validation in the model keeps your application database-agnostic. You can program an application against a SQLite database, and then deploy to MySQL.. oh wait, you don't have that.. PostgreSQL? No? Oh, Oracle, fine.

Also, in production, if a database error happens on a typical controller action that saves and then redirects, your user will be stuck staring at a blank white page (since errors are off, there was no view to output, and the redirect never happened). Basically, database errors are turned off in production mode as they can give insight into the DB schema, whereas model validation errors remain enabled as they are user-friendly.

You have a point, though: is it possible to capture these database errors and do something useful with them? Currently no, but it would be nice if CakePHP could dynamically translate them into failed model validation rules, saving us from repeating ourselves. Different databases throw different-looking errors, so each DBO datasource would need to be updated to support this before it could happen.
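A rough sketch of the kind of translation layer being described, for MySQL only. Model::invalidate() and DboSource::lastError() are real CakePHP 1.x APIs, but the wrapper function, the error-string regex, and the field-name mapping are assumptions for illustration, not an existing CakePHP feature:

```php
<?php
// Hypothetical sketch: catch MySQL's 1062 duplicate-key error after a
// failed save and turn it into an ordinary model validation error, so
// the form can be repopulated instead of the user hitting a blank page.
// NOTE: parsing the driver's (locale-dependent) error string is brittle
// and database-specific; a real implementation would live inside each
// DBO datasource, as the answer above notes.
function saveWithConstraintErrors(&$model, $data) {
    if ($model->save($data)) {
        return true;
    }
    $error = $model->getDataSource()->lastError();
    // MySQL reports e.g.: 1062: Duplicate entry 'new group' for key 'name'
    if (preg_match("/1062: Duplicate entry .* for key '(\\w+)'/", $error, $m)) {
        // Assumes the key name matches the column name, as in the example.
        $model->invalidate($m[1], 'This value is already taken.');
    }
    return false;
}
?>
```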

deizel