In a data flow task, I can slip a Row Count into the processing flow and place the count into a variable. I can later use that variable to conditionally perform some other work if the row count was > 0. This works well for me, but I have no corresponding strategy for Execute SQL Tasks that are expected to return a single row. In that event, I'm returning those values into variables. If the lookup produces no rows, the SQL task fails when assigning values into those variables.

I can branch on that component failing, but there's a side effect: if I'm running the package as a SQL Server Agent job step, the step returns DTSER_FAILURE, causing the step to fail. I can tell SQL Agent to disregard the step failure, but then I won't know if I have a legitimate error in that step. This seems harder than it should be.
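To make the failure mode concrete, here is a sketch of the kind of lookup being described; the table and column names are hypothetical. With the task's ResultSet set to "Single row", the task fails if the query returns zero rows:

```sql
-- Hypothetical single-row lookup (dbo.Customer and its columns are placeholders).
-- Bound to ResultSet = "Single row" in an Execute SQL Task, this fails
-- when no row matches, because there is nothing to assign to the variables.
SELECT CustomerName, CreditLimit
FROM dbo.Customer
WHERE CustomerId = ?;
```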

The only strategy I can think of is to first run the same query with a COUNT(*) aggregate, test whether that returns a number > 0, and if so run the query again without the count. That's ugly because I have the same query in two places that I need to keep in sync.

Is there a better way?

A: 

What kind of SQL statement? Can you change it to still return a single row with all NULLs instead of no rows?
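One way to do that, sketched against the same hypothetical lookup, is to select from a one-row anchor and LEFT JOIN to the real table, so the task always gets exactly one row (all NULLs when there is no match):

```sql
-- Sketch: always return exactly one row; columns are NULL when no match exists.
-- dbo.Customer and its columns are placeholders.
SELECT c.CustomerName, c.CreditLimit
FROM (SELECT 1 AS anchor) AS a
LEFT JOIN dbo.Customer AS c ON c.CustomerId = ?;
```

The package can then test whether the variables still hold their default/NULL-substitute values instead of branching on task failure.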

What stops it from returning more than one row? The package would fail if it ended up returning more than one row, right?

You could also change it to call a stored procedure, and then call the stored procedure in two places without code duplication. Or you could change it to a view or a user-defined function (if parameters are needed): SELECT COUNT(*) FROM udf() to check whether there is data, then SELECT * FROM udf() to get the row.
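A sketch of the UDF variant, assuming an inline table-valued function wrapping the hypothetical lookup; both Execute SQL Tasks then reference the one definition:

```sql
-- Sketch: single source of truth for the lookup (names are placeholders).
CREATE FUNCTION dbo.ufn_CustomerLookup (@CustomerId INT)
RETURNS TABLE
AS RETURN
    SELECT CustomerName, CreditLimit
    FROM dbo.Customer
    WHERE CustomerId = @CustomerId;
GO

-- First Execute SQL Task: is there data?
SELECT COUNT(*) FROM dbo.ufn_CustomerLookup(?);

-- Second Execute SQL Task (run only when the count is > 0):
SELECT * FROM dbo.ufn_CustomerLookup(?);
```

The query text lives in one place, so the two tasks can no longer drift out of sync.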

Cade Roux
A: 

In that same precedence-constraint expression you can have additional logic (&& or ||). I would take one of the variables from your single-row statement and say something to the effect of:

If @User::rowcount > 0 || @User::single_record_var != Default

That should help.
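As a concrete sketch, assuming a string variable User::single_record_var that is reset to a known default (here an empty string) before the Execute SQL Task runs, the precedence constraint could be set to "Expression" with:

```
@[User::rowcount] > 0 || @[User::single_record_var] != ""
```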

Josef Richberg