This answer focuses mainly on 'select' vs update/create/delete operations. I think it's rarer to update more than one or a few records at a time, and so I also think 'select' is where the bottlenecks tend to occur. That said, you need to know your application (profile). The best place to focus your optimization time is almost always at the database level in the queries themselves, rather than the client code. However, I do have plenty to say on how to build the client code:
I have a generic method for select queries/procedures in my data layer that looks something like this:
private static IEnumerable<IDataRecord> Retrieve(string sql, Action<SqlCommand> addParameters)
{
    //ConnectionString is a private static property in the data layer.
    //You can implement it to read from a config file or elsewhere.
    using (var cn = new SqlConnection(ConnectionString))
    using (var cmd = new SqlCommand(sql, cn))
    {
        addParameters(cmd);
        cn.Open();
        using (var rdr = cmd.ExecuteReader())
        {
            //Note: the reader yields the *same* IDataRecord instance for every row,
            //so callers must consume each record before advancing the enumerator.
            while (rdr.Read())
                yield return rdr;
        }
    }
}
And that lets me write public data layer methods that use anonymous methods to add the parameters. The technique works against .Net 2.0+ (swap var for explicit types if you're stuck on a C# 2.0 compiler), and can be written even shorter using .Net 3.5:
public IEnumerable<IDataRecord> GetFooChildrenByParentID(int ParentID)
{
    //I could easily use a stored procedure name instead of a full sql query
    return Retrieve(
        @"SELECT c.*
          FROM [ParentTable] p
          INNER JOIN [ChildTable] c ON c.ParentID = p.ID
          WHERE p.ID = @ParentID", delegate(SqlCommand cmd)
        {
            cmd.Parameters.Add("@ParentID", SqlDbType.Int).Value = ParentID;
        }
    );
}
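For reference, here's what the shorter .Net 3.5 version looks like, with the anonymous delegate replaced by a lambda (a sketch; the table and column names are the same placeholders as above):

```csharp
public IEnumerable<IDataRecord> GetFooChildrenByParentID(int parentID)
{
    return Retrieve(
        @"SELECT c.*
          FROM [ParentTable] p
          INNER JOIN [ChildTable] c ON c.ParentID = p.ID
          WHERE p.ID = @ParentID",
        cmd => cmd.Parameters.Add("@ParentID", SqlDbType.Int).Value = parentID);
}
```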
I'm going to stop right here to point you again at the code just above that uses an anonymous method for parameter creation.
This is very clean code, in that it puts the query definition and parameter creation in the same place while still allowing you to abstract away the boilerplate database connection/calling code to somewhere more re-usable. I don't think this technique is covered by any of the bullet points in your question, and it happens to be pretty darn fast as well. I think this about covers the thrust of your question.
I want to continue, though, to explain how this all fits together. The rest is fairly straightforward, but it's also easy to dump the results into a list or similar and get things wrong, ultimately hurting performance. So moving on, the business layer uses a factory to translate query results to objects (C# 3.0 or later):
public static Foo FooFactory(IDataRecord record)
{
    //FooChild derives from Foo; cast each column to the property's actual type
    return new FooChild
    {
        Property1 = (string)record[0],
        Property2 = (int)record[1]
        //...
    };
}
So that I can tie it all together like this (requires .Net 3.5):
public IEnumerable<Foo> GetFoosByParent(FooParent parent)
{
    return GetFooChildrenByParentID(parent.ID)
        .Select(f => ObjectFactories.FooFactory(f));
}
Obviously this last method can be expanded to include any additional business logic needed. It also turns out this code is exceptionally fast, because it takes advantage of the lazy evaluation features of IEnumerable. The downside is that it tends to create a lot of short-lived objects, and that can hurt the transactional performance you asked about. To get around this I sometimes break good n-tier separation and pass the IDataRecord objects directly to the presentation tier, avoiding unnecessary object creation for items that are just going to be bound to a grid control right away.
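As a presentation-tier sketch of that shortcut (the grid name and column indices are hypothetical placeholders), the key point is to copy the values out of each record as it streams by, since the same IDataRecord instance is reused for every row:

```csharp
//Consume each record as it arrives, without ever building a Foo object.
foreach (IDataRecord record in GetFooChildrenByParentID(parentID))
{
    //Copy the values out now; the record's contents change on the next Read()
    fooGrid.Rows.Add(record[1], record[2]);
}
```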
Update/Create code is similar, with the difference that you're usually only changing one record at a time rather than reading many.
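A non-query counterpart to the Retrieve method might look like this (a sketch following the same pattern; the table, column, and method names are illustrative):

```csharp
private static int Execute(string sql, Action<SqlCommand> addParameters)
{
    using (var cn = new SqlConnection(ConnectionString))
    using (var cmd = new SqlCommand(sql, cn))
    {
        addParameters(cmd);
        cn.Open();
        return cmd.ExecuteNonQuery(); //number of rows affected
    }
}

public int UpdateFooName(int id, string name)
{
    return Execute(
        "UPDATE [FooTable] SET Name = @Name WHERE ID = @ID",
        cmd =>
        {
            cmd.Parameters.Add("@ID", SqlDbType.Int).Value = id;
            cmd.Parameters.Add("@Name", SqlDbType.NVarChar, 50).Value = name;
        });
}
```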
Or, I could save you reading this long post and just tell you to use Entity Framework ;)