I'm noticing severe (seemingly exponential) degradation in MySQL query execution times when queries call UDFs in the SELECT or WHERE clauses. The UDFs in question query local tables to return scalar values, so they aren't just performing arithmetic; they effectively act as correlated subqueries. I've fixed the performance problem by removing the UDFs and rewriting the queries with correlated subqueries, more complex joins, etc., as in the sketch below.
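For concreteness, here's a minimal sketch of the kind of rewrite I mean. The table and function names (`customers`, `orders`, `customer_balance`) are made up for illustration and the slow/fast versions aren't perfectly equivalent (e.g., customers with no orders), but they show the pattern:

```sql
-- Hypothetical MySQL stored function that looks up a scalar value per row.
DELIMITER //
CREATE FUNCTION customer_balance(p_customer_id INT)
RETURNS DECIMAL(10,2)
READS SQL DATA
BEGIN
    DECLARE v_balance DECIMAL(10,2);
    SELECT SUM(o.amount) INTO v_balance
    FROM orders o
    WHERE o.customer_id = p_customer_id;
    RETURN v_balance;
END //
DELIMITER ;

-- Slow version: the function is invoked per row and optimized in isolation.
SELECT c.id, c.name, customer_balance(c.id) AS balance
FROM customers c
WHERE customer_balance(c.id) > 1000;

-- Rewritten version: the same lookup expressed as a join/aggregate,
-- so it can be planned as part of the whole query.
SELECT c.id, c.name, SUM(o.amount) AS balance
FROM customers c
JOIN orders o ON o.customer_id = c.id
GROUP BY c.id, c.name
HAVING SUM(o.amount) > 1000;
```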
I suppose that if I only had experience with MySQL, I would simply accept this as a reality, adjust my use of UDFs, and move on. But before working with MySQL I spent 5+ years on SQL Server, where I built a billing system that processed much larger data sets and relied heavily on both scalar and table-valued user-defined functions. Those UDFs also performed queries (i.e., not just arithmetic operations), and I didn't experience this sort of performance penalty when using user-defined functions on SQL Server.
What I'm wondering is whether anyone here knows SQL Server and MySQL internals well enough to confirm or refute my current theory about the cause of this difference in UDF performance between the two systems. My theory is that SQL Server's optimizer evaluates UDFs differently than MySQL's does. Perhaps it's because the storage engines are decoupled from the optimizer in MySQL? Or maybe UDF use is more prevalent on SQL Server, and MySQL's optimizer simply hasn't evolved as far yet? What I suspect is that the SQL Server optimizer treats an included UDF as part of the surrounding query (when possible) and optimizes it along with the rest of the query, while MySQL's does not. Maybe I'm way off the mark here, but I just never saw this kind of performance hit for using UDFs on SQL Server.
Any light others can shed on this issue will be appreciated.