views: 156
answers: 7

And if so, why? I mean, is a tinyint faster to search than an int?

If so, what are the practical differences in performance?

+3  A: 

Theoretically, yes, a tinyint is faster than an int. But good database design and proper indexing have a far more substantial effect on performance, so I always use int for design simplicity.

Robert Harvey
+1  A: 

Of course, choosing the right data types always helps queries execute faster.

Take a look at this article; it should help you out: http://www.peachpit.com/articles/article.aspx?p=30885&seqNum=7

nik
+6  A: 

Depending on the data types, yes, it does make a difference.

int vs. tinyint wouldn't make a noticeable difference in speed, but it would make a difference in data size. A tinyint is 1 byte versus 4 for an int, so that's 3 bytes saved on every row, and it adds up after a while.
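
As a rough sketch of that arithmetic (hypothetical table and column names, SQL Server/MySQL-style syntax):

    -- The same status code stored both ways.
    CREATE TABLE order_status_int  (status INT     NOT NULL);  -- 4 bytes per value
    CREATE TABLE order_status_tiny (status TINYINT NOT NULL);  -- 1 byte per value

    -- Per row: 4 - 1 = 3 bytes saved.
    -- Over 1,000,000 rows: roughly 3 MB of raw data saved, before row/page overhead.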

Now, if it were int against varchar, there would be a real drop, since things like sorts are much faster on integer values than on string values.
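
A made-up example of the sorting point; sorting numbers stored as text is not only slower, it also gives lexical rather than numeric order:

    CREATE TABLE quantities (qty_int INT, qty_text VARCHAR(10));
    INSERT INTO quantities VALUES (2, '2'), (9, '9'), (10, '10');

    SELECT qty_int  FROM quantities ORDER BY qty_int;   -- 2, 9, 10
    SELECT qty_text FROM quantities ORDER BY qty_text;  -- '10', '2', '9' (lexical order)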

If it's a comparable type, and you're not very pressed for space, go with the one that's easier and more robust.

Slokun
+2  A: 

Choosing the right data type can improve performance. In a lot of cases the practical difference might not be large, but a bad choice can definitely have an impact. Imagine using a 1000-character char field instead of a varchar field when you are only going to be storing a string of a few characters. It's a bit of an extreme example, but you would definitely be a lot better off using a varchar. You would probably never notice a difference in performance between an int and a tinyint. Your overall database design (normalized tables, good indexes, etc.) will have a far larger impact.
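
To make that char-versus-varchar example concrete (the table names here are hypothetical):

    CREATE TABLE notes_fixed    (note CHAR(1000));     -- always stores 1000 characters, space-padded
    CREATE TABLE notes_variable (note VARCHAR(1000));  -- stores only the actual length plus a small overhead

    INSERT INTO notes_fixed    VALUES ('hello');  -- takes ~1000 bytes in the row
    INSERT INTO notes_variable VALUES ('hello');  -- takes ~7 bytes in the row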

TLiebe
+3  A: 

I would venture that there are no practical performance differences in that case. Storage space is the more substantial factor, but even then it's not much of a difference. Between a tinyint and an int the gap is 3 bytes per row, so after 500,000 rows you've used roughly an extra 1.5 megabytes. Hopefully you aren't pinching megabytes if you are working with that much data.

Fletcher Moore
+1  A: 

The performance impact depends on the scale of your model and how it is used. While storage space is almost a non-issue these days, you might still need to think about performance:

Database engines store data in chunks called pages. SQL Server uses 8 KB pages, Oracle's block size is configurable (commonly 2 KB to 8 KB), and MySQL's InnoDB uses 16 KB pages by default; none of these are very big. Whenever you perform an operation on a piece of data (a field in a row), the entire page containing it is fetched from disk into memory. When your data is smaller (tinyint vs. int), more rows and data items fit into a page, so the likelihood of having to fetch additional pages goes down and overall performance goes up. So yes, using the smallest reasonable representation of your data can definitely have an impact on performance, because it allows the database engine to be more efficient.
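
As a rough sketch, assuming SQL Server 2005 or later and its sys.dm_db_partition_stats view, you could see this effect by loading the same rows into two otherwise identical tables and comparing how many data pages each one uses:

    CREATE TABLE flags_int  (id INT NOT NULL PRIMARY KEY, flag INT     NOT NULL);
    CREATE TABLE flags_tiny (id INT NOT NULL PRIMARY KEY, flag TINYINT NOT NULL);

    -- ...load identical data into both tables, then:
    SELECT OBJECT_NAME(object_id) AS table_name, in_row_data_page_count
    FROM sys.dm_db_partition_stats
    WHERE object_id IN (OBJECT_ID('flags_int'), OBJECT_ID('flags_tiny'));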

Paul Sasik
+1  A: 

One way the data type can affect performance is by determining whether values must be converted before you can work with them. This happens when someone uses varchar instead of a datetime data type, for instance, and every date calculation then needs a conversion first. A proper type also gives you a smaller record (this is why you shouldn't define everything at the maximum size), which affects how pages are stored and retrieved in the database.
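
A hypothetical illustration of that conversion cost (the table and column names are made up):

    -- Date stored as text: every comparison has to convert first.
    SELECT * FROM orders_text  WHERE CAST(order_date AS DATETIME) >= '2010-01-01';

    -- Date stored as a real datetime: the predicate is evaluated (and indexed) directly.
    SELECT * FROM orders_dated WHERE order_date >= '2010-01-01';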

Of course, using the correct data type also helps data integrity; you can't store a date that doesn't exist in a datetime field, but you can in a varchar field. If you use float instead of int, your values aren't restricted to integers, and so on. And speaking of float, it is generally a bad choice if you intend to do math on the values, because it is not an exact type and you get rounding errors.
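
A quick sketch of the rounding issue (SQL Server-style syntax assumed):

    SELECT CASE WHEN CAST(0.1 AS FLOAT) + CAST(0.2 AS FLOAT) = CAST(0.3 AS FLOAT)
                THEN 'equal' ELSE 'not equal' END;   -- typically 'not equal'

    SELECT CASE WHEN CAST(0.1 AS DECIMAL(9,1)) + CAST(0.2 AS DECIMAL(9,1)) = CAST(0.3 AS DECIMAL(9,1))
                THEN 'equal' ELSE 'not equal' END;   -- 'equal'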

HLGEM