There are a few performance issues to address...
Don't Access the Same Table More Than Once, If Possible
Don't use a subquery for criteria that can be evaluated without referencing an additional copy of the same table. It's acceptable when you genuinely need data from another copy of the table, such as when using aggregate functions (MAX, MIN, etc.), though analytic functions (ROW_NUMBER, RANK, etc.) are often a better fit where they're supported; a sketch of the two approaches follows.
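As a hedged illustration (the dbo.Orders table and its columns are invented for the example), here is a "latest order per customer" requirement written first with a correlated subquery that reads the table twice, then with ROW_NUMBER so the table is read only once. Note the two forms handle ties on OrderDate differently: the subquery returns every tied row, while ROW_NUMBER keeps exactly one.

-- Correlated subquery: dbo.Orders is referenced twice
SELECT o.CustomerID, o.OrderDate, o.Total
FROM dbo.Orders AS o
WHERE o.OrderDate = (SELECT MAX(o2.OrderDate)
                     FROM dbo.Orders AS o2
                     WHERE o2.CustomerID = o.CustomerID);

-- Analytic alternative: a single pass over dbo.Orders
WITH Ranked AS
(
    SELECT o.CustomerID, o.OrderDate, o.Total,
           ROW_NUMBER() OVER (PARTITION BY o.CustomerID
                              ORDER BY o.OrderDate DESC) AS rn
    FROM dbo.Orders AS o
)
SELECT CustomerID, OrderDate, Total
FROM Ranked
WHERE rn = 1;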
Don't Compare What You Don't Need To
If your parameter is NULL, and NULL means you want any value for the column you are comparing against, don't include the filter at all. A predicate like this:
WHERE a.Name LIKE '%' + ISNULL(@Name, N'') + '%'
...guarantees the optimizer will have to evaluate the name column for every row, wildcards or not. Worse still with LIKE, the wildcard on the left side of the pattern prevents an index seek on the searched column, even if a suitable index exists.
A better-performing approach would be:
IF @Name IS NOT NULL
BEGIN
SELECT ...
FROM ...
WHERE a.Name LIKE '%' + @Name + '%'
END
ELSE
BEGIN
SELECT ...
FROM ...
END
Well-performing SQL is all about tailoring the statement to exactly what you need, which is why you should consider dynamic SQL when a query has two or more independent, optional criteria; a sketch follows.
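As a rough sketch only (dbo.Person, its Name and City columns, and the @City parameter are invented for the example), parameterized dynamic SQL via sp_executesql lets you build just the predicates you actually need while keeping the statement parameterized:

DECLARE @Name nvarchar(100) = N'Smith';   -- would normally be procedure parameters
DECLARE @City nvarchar(100) = NULL;

DECLARE @sql nvarchar(max) =
    N'SELECT a.Name, a.City FROM dbo.Person AS a WHERE 1 = 1';

-- Append each predicate only when its parameter was supplied
IF @Name IS NOT NULL
    SET @sql += N' AND a.Name LIKE ''%'' + @Name + ''%''';
IF @City IS NOT NULL
    SET @sql += N' AND a.City = @City';

EXEC sys.sp_executesql
     @sql,
     N'@Name nvarchar(100), @City nvarchar(100)',
     @Name = @Name,
     @City = @City;

Each distinct combination of predicates gets its own plan, so every variation is optimized for exactly the criteria it filters on.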
Use The Right Tool
The LIKE operator isn't an efficient way to check for the existence of a string somewhere inside text data. Full-Text Search (FTS) was designed to address exactly that shortcoming:
IF @Name IS NOT NULL
BEGIN
SELECT ...
FROM ...
WHERE CONTAINS(a.Name, @Name)
END
ELSE
BEGIN
SELECT ...
FROM ...
END
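Note that CONTAINS only works against a column covered by a full-text index, and it does word/prefix matching rather than arbitrary substring matching, so results can differ from LIKE '%...%'. As a minimal setup sketch, assuming a table dbo.Person with a unique key index named PK_Person (both names invented here):

-- One-time setup: a catalog plus a full-text index on the searched column
CREATE FULLTEXT CATALOG PersonCatalog AS DEFAULT;

CREATE FULLTEXT INDEX ON dbo.Person (Name)
    KEY INDEX PK_Person
    WITH STOPLIST = SYSTEM;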
Always Test & Compare
I agree with LittleBobbyTables - the decision ultimately comes down to checking the execution plan for each alternative, because table design and data distribution affect the optimizer's choices and the resulting performance. In SQL Server, the plan with the lowest estimated subtree cost is the one the optimizer believes is cheapest, but that can change over time if table statistics and indexes aren't maintained.
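As a small, hedged example of how to compare the alternatives (the candidate queries themselves are placeholders), you can look at actual I/O and CPU numbers alongside the plan:

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- run candidate query A here, note logical reads and CPU time
-- run candidate query B here, note logical reads and CPU time

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;

In SSMS, turning on "Include Actual Execution Plan" also shows the estimated subtree cost when you hover over the root operator of each plan.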