I'm running into an odd bug with datetime fields in SQL Server 2005. The datetime field shows up with millisecond-level accuracy, but it looks like the milliseconds are not always used. Here's my test query:
SELECT col1, YEAR(col1) AS yr, MONTH(col1) AS mn, DAY(col1) AS dy
FROM mytable
WHERE col1 >= '2009-12-31 00:00:00.0' AND col1 <= '2009-12-31 23:59:59.999'
ORDER BY col1
In my results I get:
col1                    | yr   | mn | dy
------------------------+------+----+----
2009-12-31 00:00:00.000 | 2009 | 12 | 31
2010-01-01 00:00:00.000 | 2010 | 1  | 1
The problem is that the 2010-01-01 row is returned, even though that date shouldn't be less than or equal to "2009-12-31 23:59:59.999". But if I change the upper bound to "2009-12-31 23:59:59.998", the query works as expected (no 2010 datetimes are returned).
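In case it helps, here's a smaller repro that doesn't touch the table at all; it just casts the two boundary literals to DATETIME and checks whether the .999 one crosses into 2010 (the column aliases are just names I made up for the test):

-- Standalone check: how does SQL Server interpret the two boundary literals?
SELECT CAST('2009-12-31 23:59:59.999' AS DATETIME) AS upper_999,
       CAST('2009-12-31 23:59:59.998' AS DATETIME) AS upper_998,
       CASE WHEN CAST('2009-12-31 23:59:59.999' AS DATETIME)
                 >= CAST('2010-01-01 00:00:00.000' AS DATETIME)
            THEN 'reaches 2010'
            ELSE 'stays in 2009'
       END AS boundary_check;

If the issue is in how the literal itself gets converted, this should show it without involving mytable at all.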
Is this a bug, or is this just how SQL Server works? If this is how it works, is there some reason for it? I ran into this migrating some queries from MySQL, where this works as expected (even though MySQL doesn't even store the milliseconds!).