(background: http://stackoverflow.com/questions/1097467/why-should-i-use-int-instead-of-a-byte-or-short-in-c)
To satisfy my own curiosity about the pros and cons of using the "appropriately sized" integer versus the "optimized" integer, I wrote the code below. It reinforced what I previously held to be true about integer performance in .NET (and which is explained in the link above): the runtime is optimized for int rather than for short or byte.
DateTime t;
long a, b, c;

// time 127 iterations with an int counter
t = DateTime.Now;
for (int index = 0; index < 127; index++)
{
    Console.WriteLine(index.ToString());
}
a = DateTime.Now.Ticks - t.Ticks;

// time 127 iterations with a short counter
t = DateTime.Now;
for (short index = 0; index < 127; index++)
{
    Console.WriteLine(index.ToString());
}
b = DateTime.Now.Ticks - t.Ticks;

// time 127 iterations with a byte counter
t = DateTime.Now;
for (byte index = 0; index < 127; index++)
{
    Console.WriteLine(index.ToString());
}
c = DateTime.Now.Ticks - t.Ticks;

Console.WriteLine(a.ToString());
Console.WriteLine(b.ToString());
Console.WriteLine(c.ToString());
This gives roughly consistent results (in ticks) in the area of...
~950000
~2000000
~1700000
which is in line with what I would expect to see.
However, when I try repeating the loops for each data type, like this...
t = DateTime.Now;
for (int index = 0; index < 127; index++)
{
    Console.WriteLine(index.ToString());
}
for (int index = 0; index < 127; index++)
{
    Console.WriteLine(index.ToString());
}
for (int index = 0; index < 127; index++)
{
    Console.WriteLine(index.ToString());
}
a = DateTime.Now.Ticks - t.Ticks;
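(I haven't reproduced the short and byte blocks here, but they repeat the same three-loop pattern, roughly like this, with the byte version identical except for the counter type and storing into c:)

// short counter, same three-loop pattern as the int version above
t = DateTime.Now;
for (short index = 0; index < 127; index++)
{
    Console.WriteLine(index.ToString());
}
for (short index = 0; index < 127; index++)
{
    Console.WriteLine(index.ToString());
}
for (short index = 0; index < 127; index++)
{
    Console.WriteLine(index.ToString());
}
b = DateTime.Now.Ticks - t.Ticks;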
the numbers are more like...
~4500000
~3100000
~300000
I find this puzzling. Can anyone offer an explanation?
NOTE: In the interest of comparing like for like, I've limited the loops to 127 iterations because of the range of the byte value type. Also, this is an act of curiosity, not production-code micro-optimization.
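For what it's worth, here is roughly what I assume a Stopwatch-based version of the same measurement would look like (a sketch, not the code I actually ran), in case the resolution of DateTime.Now turns out to be part of the answer, since Stopwatch uses the high-resolution performance counter where one is available:

using System;
using System.Diagnostics;

// Sketch only: the same loops as above, but timed with Stopwatch.ElapsedTicks
// instead of differences of DateTime.Now.Ticks.
var sw = Stopwatch.StartNew();
for (int index = 0; index < 127; index++)
{
    Console.WriteLine(index.ToString());
}
long a = sw.ElapsedTicks;

sw.Restart();
for (short index = 0; index < 127; index++)
{
    Console.WriteLine(index.ToString());
}
long b = sw.ElapsedTicks;

sw.Restart();
for (byte index = 0; index < 127; index++)
{
    Console.WriteLine(index.ToString());
}
long c = sw.ElapsedTicks;

Console.WriteLine(a);
Console.WriteLine(b);
Console.WriteLine(c);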