I'm implementing 'check' constraints that simply call a CLR function for each constrained column.

Each CLR function is one or two lines of code that attempts to construct an instance of the user-defined C# data class associated with that column. For example, a "Score" class has a constructor which throws a meaningful error message when construction fails (i.e. when the score is outside a valid range).

First, what do you think of that approach? For me, it centralizes my data types in C#, making them available throughout my application, while also enforcing the same constraints within the database, so it prevents invalid manual edits in management studio that non-programmers may try to make.

It's working well so far, although updating the assembly causes constraints to be disabled, requiring a recheck of all constraints (which is perfectly reasonable). I use DBCC CHECKCONSTRAINTS WITH ALL_CONSTRAINTS to make sure the data in all tables is still valid for enabled and disabled constraints, making corrections as necessary, until there are no errors. Then I re-enable the constraints on all the tables via ALTER TABLE [tablename] WITH CHECK CHECK CONSTRAINT ALL.

Is there a T-SQL statement to re-enable with check all check constraints on ALL tables, or do I have to re-enable them table by table?

Finally, for the CLR functions used in the check constraints, I can either:

  1. Include a try/catch in each function to catch data construction errors, returning false on error, and true on success, so that the CLR doesn't raise an error in the database engine, or...
  2. Leave out the try/catch, just construct the instance and return true, allowing that aforementioned 'meaningful' error message to be raised in the database engine.

I prefer 2, because my functions are simpler without the error code, and when someone using management studio makes an invalid column edit, they'll get the meaningful message from the CLR like "Value for type X didn't match regular expression '^p[1-9]\d?$'" instead of some generic SQL error like "constraint violated". Are there any severe negative consequences of allowing CLR errors through to SQL Server, or is it just like any other insert/update failure resulting from a constraint violation?
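For concreteness, the two options might look roughly like this as SQL CLR functions (a sketch only; Score and the function names are hypothetical stand-ins for the classes described above):

```csharp
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;

public partial class UserDefinedFunctions
{
    // Option 1: swallow construction errors and return false, so SQL Server
    // reports only a generic check-constraint violation.
    [SqlFunction]
    public static SqlBoolean IsValidScoreSafe( SqlInt32 value )
    {
        try { new Score( value.Value ); return SqlBoolean.True; }
        catch { return SqlBoolean.False; }
    }

    // Option 2: just construct and return true; a failed construction throws,
    // and the CLR exception's message surfaces in the SQL Server error.
    [SqlFunction]
    public static SqlBoolean IsValidScore( SqlInt32 value )
    {
        new Score( value.Value );
        return SqlBoolean.True;
    }
}
```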

+2  A: 

For example, a "Score" class has a constructor which throws a meaningful error message when construction fails (i.e. when the score is outside a valid range). First, what do you think of that approach?

It worries me a bit, because calling a ctor requires memory allocation, which is relatively expensive. For each row inserted, you're calling a ctor -- and only for its side-effects.

Also expensive are exceptions. They're great when you need them, but this is a case where you'd want them in a ctor context, not in a check context.

A refactoring could reduce both costs: make the check a class-static (or free) function, and have both the check constraint and the ctor call it:

class Score {

  private:
    int score;

  public:
    static bool valid( int score ) {
      return score > 0;
    }

    Score( int s ) {
      if( !valid( s ) ) {
        throw InvalidParameter();
      }
      score = s;
    }
};

The check constraint calls Score::valid(); no construction or exception needed.

Of course, you still have the overhead, for each row, of a CLR call. Whether that's acceptable is something you'll have to decide.
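For reference, wiring such a function into a check constraint looks roughly like this (assembly, schema, table, and function names here are hypothetical):

```sql
-- Expose the CLR method as a scalar UDF:
CREATE FUNCTION dbo.fn_IsValidScore( @value int )
RETURNS bit
AS EXTERNAL NAME MyClrAssembly.[UserDefinedFunctions].IsValidScore;
GO

-- Bind it into a check constraint; WITH CHECK validates existing rows too:
ALTER TABLE dbo.Results WITH CHECK
    ADD CONSTRAINT CK_Results_Score CHECK ( dbo.fn_IsValidScore( Score ) = 1 );
```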

Is there a T-SQL statement to re-enable with check all check constraints on ALL tables, or do I have to re-enable them table by table?

No, but you can do this to generate the commands:

select 'ALTER TABLE ' + quotename(name) + ' WITH CHECK CHECK CONSTRAINT ALL;'
 from sys.tables;

and then run the resultset against the database.
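Alternatively, the undocumented (and unsupported) sp_MSforeachtable procedure can do it in one statement; its '?' placeholder expands to each table name:

```sql
EXEC sp_MSforeachtable 'ALTER TABLE ? WITH CHECK CHECK CONSTRAINT ALL';
```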

Comments from the OP:

I use base classes called ConstrainedNumber and RegexConstrainedString for all my data types. I could easily move those two classes' simple constructor code to a separate public boolean IsValueValid method as you suggested, and probably will.

The CLR overhead (and memory allocation) would only occur for inserts and updates. Given the simplicity of the methods, and the rate at which table updates will occur, I don't think the performance impact will be anything to worry about for my system.

I still really want to raise exceptions for the information they'll provide to management studio users. I like the IsValueValid method, because it gives me the 'option' of not throwing errors. Within applications using my data types, I could still get the exception by constructing an instance :)

I'm not sure I agree with the exception throwing, but again, the "take-home message" is that by decomposing the problem into parts, you can select which parts you're willing to pay for, without paying for parts you don't use. The ctor you didn't need, because you were only calling it for its side effect. So we decomposed creation and checking. We can further decompose throwing:

class Score {

  private:
    int score;

  public:
    static bool isValid( int score ) {
      return score > 0;
    }

    static void checkValid( int score ) {
      if( !isValid( score ) ) {
        throw InvalidParameter();
      }
    }

    Score( int s ) {
      checkValid( s );
      score = s;
    }
};

Now a user can call the ctor and get the check, the possible exception, and construction; call checkValid and get the check and the exception; or call isValid to get just the validity, paying the runtime cost for only what they need.

tpdi
In response to edits... see next post.
Triynko
A: 

Some clarification: these data classes sit one level above the primitive types, constraining data to make it meaningful.

Actually, they sit just above the RegexConstrainedString and ConstrainedNumber<T> classes, which is where we're talking about refactoring the constructor's validation code into a separate method.

The problem with refactoring the validation code is that the Regex necessary for validation exists only in the subclasses of RegexConstrainedString, since each subclass has a different Regex. This means that the validation data is only available to the RegexConstrainedString's constructor, not any of its methods. So, if I factor out the validation code, callers would need access to the Regex.

public class Password: RegexConstrainedString
{
    internal static readonly Regex regex = CreateRegex_CS_SL_EC( @"^[\w!""#\$%&'\(\)\*\+,-\./:;<=>\?@\[\\\]\^_`{}~]{3,20}$" );

    public Password( string value ): base( value.TrimEnd(), regex ) {} //length enforced by regex, so no min/max values specified
    public Password( Password original ): base( original ) {}
    public static explicit operator Password( string value ) {return new Password( value );}
}

So, when reading a value from the database or reading user input, the Password constructor forwards the Regex to the base class to handle the validation. Another trick is that it trims the end characters automatically, in case the database type is char rather than varchar, so I don't have to remember to do it. Anyway, here is what the main constructor for RegexConstrainedString looks like:

protected RegexConstrainedString( string value, Regex subclasses_static_regex, int? min_length, int? max_length )
{
    _value = (value ?? String.Empty);
    if (min_length != null && _value.Length < min_length)
        throw new Exception( "Value doesn't meet minimum length of " + min_length + " characters." );
    if (max_length != null && _value.Length > max_length)
        throw new Exception( "Value exceeds maximum length of " + max_length + " characters." );
    value_match = subclasses_static_regex.Match( _value ); //Match.Synchronized( subclasses_static_regex.Match( _value ) );
    if (!value_match.Success)
        throw new Exception( "Invalid value specified (" + _value + "). \nValue must match regex:" + subclasses_static_regex.ToString() );
}

Since callers would need access to the subclass's Regex, I think my best bet is to implement an IsValueValid method in the subclass, which forwards the data to the IsValueValid method in the RegexConstrainedString base class. In other words, I would add this line to the Password class:

public static bool IsValueValid( string value ) {return IsValueValid( value.TrimEnd(), regex, min_length, max_length );}

I don't like this, however, because I'm replicating the subclass's constructor code: I have to remember to trim the string again and pass the same min/max lengths when necessary. This requirement would be forced upon all subclasses of RegexConstrainedString, and it's not something I want to do. Data classes like Password are so simple because RegexConstrainedString handles most of the work: implementing operators, comparisons, cloning, etc.
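One way to limit that replication (a sketch only, with hypothetical class and member names, not my actual base class) is to keep the shared length/regex logic in a single static helper on the base, so each subclass writes only the one forwarding line:

```csharp
using System.Text.RegularExpressions;

public abstract class RegexConstrainedStringBase
{
    // Shared validation: the subclass supplies its own regex and optional
    // length bounds; no instance is constructed and nothing is thrown.
    protected static bool IsValueValid( string value, Regex regex,
                                        int? min_length, int? max_length )
    {
        value = value ?? string.Empty;
        if (min_length != null && value.Length < min_length) return false;
        if (max_length != null && value.Length > max_length) return false;
        return regex.Match( value ).Success;
    }
}
```

The trimming and the min/max arguments still have to be repeated in each subclass's forwarding line, which is exactly the duplication objected to above.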

Furthermore, there are other complications with factoring out the code. The validation involves running and storing a Regex match in the instance, since some data types may have properties that report on specific elements of the string. For example, my SessionID class contains properties like TimeStamp, which return a matched group from the Match stored in the data class instance. The bottom line is that this static method is an entirely different context. Since it's essentially incompatible with the constructor context, the constructor cannot use it, so I would end up replicating code once again.

So... I could factor out the validation code by replicating it, tweaking it for a static context, and imposing requirements on subclasses, or I could keep things much simpler and just perform the object construction. The relative extra memory allocated would be minimal, as only a string and a Match reference are stored in the instance. Everything else, such as the Match and the string itself, would still be generated by the validation anyway, so there's no way around that.

I could worry about the performance all day, but my experience has been that correctness is more important, because correctness often leads to numerous other optimizations. For example, I don't ever have to worry about improperly formatted or sized data flowing through my application, because only meaningful data types are used, which forces validation to the point of entry into the application from other tiers, be it database or UI. 99% of my validation code was removed as a no-longer-necessary artifact, and I find myself only checking for nulls nowadays.

Incidentally, having reached this point, I now understand why including nulls was the billion-dollar mistake. It seems to be the only thing I have to check for anymore, even though nulls are essentially non-existent in my system. Complex objects having these data types as fields cannot be constructed with nulls, but I have to enforce that in the property setters, which is irritating, because they otherwise would never need validation code, only code that runs in response to changes in values.

UPDATE: I simulated the CLR function calls both ways, and found that when all data is valid, the performance difference is only fractions of a millisecond per thousand calls, which is negligible. However, when roughly half the passwords are invalid, throwing exceptions in the "instantiation" version, it's three orders of magnitude slower, which equates to about one extra second per 1000 calls. The magnitudes of difference will of course multiply as multiple CLR calls are made for multiple columns in the table, but that's a factor of 3 to 5 for my project.

So, is an extra 3-5 seconds per 1000 updates acceptable to me, as a trade-off for keeping my code very simple and clean? Well, that depends on the update rate. If my application were getting 1000 updates per second, a 3-5 second delay would be devastating. If, on the other hand, I were getting 1000 updates a minute or an hour, it might be perfectly acceptable. In my situation, I can tell you now that it's quite acceptable, so I think I'll just go with the instantiation method, and allow the errors through.

Of course, in this test I handled the errors in the CLR instead of letting SQL Server handle them. Marshalling the error info to SQL Server, and then possibly back to the application, could definitely slow things down much more. I guess I will have to fully implement this to get a real test, but from this preliminary test, I'm pretty sure what the results will be.
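The shape of that simulation can be sketched like this (made-up names and a trivial stand-in validity rule, not the real Password class or the CLR host; absolute timings will differ):

```csharp
using System;
using System.Diagnostics;

static class CheckTimingSketch
{
    // Stand-in for Password.IsValueValid (the real rule is a regex match).
    static bool IsValid( string s ) { return s.Length >= 3 && s.Length <= 20; }

    static void Main()
    {
        string[] samples = { "validpw", "x" }; // alternate valid/invalid

        // "Check" style: boolean test only, no exceptions.
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < 1000; i++)
            IsValid( samples[i % 2] );
        sw.Stop();
        Console.WriteLine( "check:       " + sw.Elapsed );

        // "Instantiation" style: throw on invalid input, catch, continue.
        sw.Restart();
        for (int i = 0; i < 1000; i++)
        {
            try { if (!IsValid( samples[i % 2] )) throw new Exception( "invalid" ); }
            catch { /* swallowed here, as the try/catch CLR variant would */ }
        }
        sw.Stop();
        Console.WriteLine( "instantiate: " + sw.Elapsed );
    }
}
```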

Triynko