Obviously, a = b and b = c imply a = c; this is related to transitive closure. The point Joel was making is that some SQL servers are poor at optimizing queries, so SQL queries are sometimes written with "extra" redundant qualifiers, as in the example.
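To make the "extra qualifier" idea concrete, here is a sketch of such a query (the table and column names are made up), where the last predicate is redundant by transitivity but is spelled out anyway to nudge a weak optimizer:

```sql
-- Hypothetical tables a, b, c sharing a key column "id".
-- The predicate a.id = c.id follows from the first two by transitivity,
-- but writing it out explicitly can help an optimizer that does not infer it.
SELECT *
FROM   a, b, c
WHERE  a.id = b.id
  AND  b.id = c.id
  AND  a.id = c.id;  -- the "extra" qualifier
```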
In this example, remember that a, b and c above often refer to columns in different tables, and conditions like a = b are evaluated as joins. Suppose table a has 1000 rows, b has 500 and c has 20. Then a naive join of a and b needs 1000x500 row comparisons (this is my dumb example; in practice there are much better join algorithms that reduce the cost a lot), and joining b and c needs 500x20 comparisons. An optimizing query planner will determine that the join of b and c should be performed first, and the result then joined with a on a = b, since b = c is expected to return fewer rows. In total that is about 500x20 + 500x1000 comparisons for (b = c) and (a = b) respectively. After that, intersections have to be computed between the returned rows (I guess also via joins, but I'm not sure).
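As a sketch, that plan corresponds to the following join order (same hypothetical tables; the cost figures in the comments are just the naive nested-loop counts from above):

```sql
-- |a| = 1000, |b| = 500, |c| = 20
--   b JOIN c:        500 x 20   =  10,000 comparisons
--   result JOIN a:   500 x 1000 = 500,000 comparisons
--   total:                        ~510,000 comparisons
SELECT *
FROM   b
JOIN   c ON b.id = c.id
JOIN   a ON a.id = b.id;
```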
Now suppose the SQL server had a logical inference module that could infer that these conditions also imply a = c. Then it would probably join b and c first, and then join the result with a on a = c (again, a hypothetical case). That would take 500x20 + 1000x20 comparisons, plus the intersection computations afterwards. If the expected number of rows with a = c is smaller (due to some domain knowledge), the second plan will be a lot faster.
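In that hypothetical case the rewritten query would join on the inferred, more selective predicate:

```sql
--   b JOIN c:        500 x 20  =  10,000 comparisons
--   result JOIN a:  1000 x 20  =  20,000 comparisons
--   total:                       ~30,000 comparisons, far fewer
SELECT *
FROM   b
JOIN   c ON b.id = c.id
JOIN   a ON a.id = c.id;  -- probe the small c side instead of all of b
```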
My answer has become too long, but the upshot is that SQL query optimization is not a trivial task, which is why some SQL servers don't do it very well.
More can be found at http://en.wikipedia.org/wiki/Query_optimizer, or from some expert on databases reading this.
But philosophically speaking, SQL (as an abstraction) was meant to hide all aspects of the implementation. It was meant to be declarative: a SQL server can itself use query optimization techniques to rephrase a query into a more efficient form. In the real world it doesn't always work out that way; queries often have to be rewritten by humans to make them efficient.
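This is where the leak shows in practice: most engines let you inspect the plan they chose, and humans use that to decide whether a rewrite is needed. The exact syntax varies by engine (EXPLAIN in PostgreSQL/MySQL, EXPLAIN QUERY PLAN in SQLite):

```sql
-- Ask the engine which join order it actually picked for our
-- hypothetical three-table query (syntax shown for PostgreSQL/MySQL).
EXPLAIN
SELECT *
FROM   a, b, c
WHERE  a.id = b.id
  AND  b.id = c.id;
```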
Overall, the point of the article is that an abstraction can only be so good, and no abstraction is perfect.