Hi
I want to increase a decimal's smallest fractional part by one, so that for example
decimal d = 0.01M;
d++;
// d == 0.02M
or
decimal d = 0.000012349M;
d++;
// d == 0.000012350M
How do I do this?
What about this:
static class DecimalExt {
    public static decimal PlusPlus(this decimal value) {
        // Find the smallest power of ten that divides the value evenly:
        // that is the place value of its last significant digit.
        decimal test = 1M;
        while (0 != value % test) {
            test /= 10;
        }
        return value + test;
    }
}
class Program {
    public static void Main(params string[] args) {
        decimal x = 3.14M;
        x = x.PlusPlus(); // x is now 3.15
    }
}
I used an extension method here; you cannot redefine the ++ operator for the decimal type.
The decimal type (.NET 2.0 and later) retains significant trailing zeroes that result from a calculation or from parsing a string. E.g. 1.2 * 0.5 = 0.60 (multiplying two numbers each accurate to one decimal place gives a result accurate to two decimal places, even when the second decimal place is zero):
decimal result = 1.2M * 0.5M;
Console.WriteLine(result.ToString()); // outputs 0.60
The following assumes you want to consider all significant digits in your decimal value, i.e.
decimal d = 1.2349M;       // original 1.2349
d = IncrementLastDigit(d); // result is 1.2350
d = IncrementLastDigit(d); // result is 1.2351 (not 1.2360)
However, if you want to first remove trailing zeroes, you can do so, e.g. using the technique in this answer.
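(Sketched here, and not necessarily the linked answer's exact code: one well-known way to drop trailing zeroes is to divide by a 1 carrying the maximum scale, because decimal division re-scales an exact result to the smallest scale that preserves it.)
static decimal Normalize(decimal value)
{
    // 0.60M -> 0.6M, 1.2500M -> 1.25M; the numeric value is unchanged.
    return value / 1.0000000000000000000000000000M;
}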
There's nothing built-in to do this. You'll have to do it yourself by (a) determining how many digits there are after the decimal point, then (b) adding the appropriate amount.
To determine how many digits there are after the decimal point, you can either format the value as a string and count them, or, more efficiently, call decimal.GetBits(), which returns an array of four integers whose fourth element holds the scaling factor in bits 16-23.
Once you have that, you can easily construct the value to add to your decimal.
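For illustration, here's how to read the scale and build the matching step (a sketch using decimal.GetBits and the decimal(int, int, int, bool, byte) constructor; the full implementations follow below):
int[] bits = decimal.GetBits(1.2349M);
byte scale = (byte)((bits[3] >> 16) & 0xFF);       // 4: four digits after the decimal point
decimal step = new decimal(1, 0, 0, false, scale); // 0.0001M
Console.WriteLine(1.2349M + step);                 // 1.2350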
Here's an implementation that uses GetBits, and which "increments" away from zero for negative numbers, e.g. IncrementLastDigit(-1.234M) => -1.235M:
static decimal IncrementLastDigit(decimal value)
{
    int[] bits1 = decimal.GetBits(value);
    int saved = bits1[3];
    bits1[3] = 0; // Set scaling to 0, remove sign
    int[] bits2 = decimal.GetBits(new decimal(bits1) + 1);
    bits2[3] = saved; // Restore original scaling and sign
    return new decimal(bits2);
}
Or here's an alternative (perhaps slightly more elegant):
static decimal GetScaledOne(decimal value)
{
    int[] bits = decimal.GetBits(value);
    // Generate a value of 1, scaled using the same scaling factor as the input value
    bits[0] = 1;
    bits[1] = 0;
    bits[2] = 0;
    bits[3] = bits[3] & 0x00FF0000; // Keep the scale, clear the sign bit
    return new decimal(bits);
}

static decimal IncrementLastDigit(decimal value)
{
    return value < 0 ? value - GetScaledOne(value) : value + GetScaledOne(value);
}
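For example, with either implementation:
Console.WriteLine(IncrementLastDigit(1.2349M)); // 1.2350
Console.WriteLine(IncrementLastDigit(0.60M));   // 0.61
Console.WriteLine(IncrementLastDigit(-1.234M)); // -1.235 (away from zero)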
This would do the trick:
static decimal IncrementLastDigit(decimal d)
{
    int incr = 1;
    // Use the invariant culture so the decimal separator is always '.'
    // (requires using System.Globalization;)
    string s = d.ToString(CultureInfo.InvariantCulture);
    int pos = s.IndexOf('.');
    if (pos > 0)
    {
        int len = s.Length - pos - 1; // number of digits after the decimal point
        double val = Convert.ToDouble(d);
        val = Math.Round(val * Math.Pow(10, len) + incr) / Math.Pow(10, len);
        d = Convert.ToDecimal(val);
    }
    else
    {
        d += incr;
    }
    return d;
}
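Bear in mind that the round trip through double only works while the value fits in double's roughly 15-17 significant digits; past that, the rounding produces a wrong answer. A quick illustration with a hypothetical value:
decimal big = 1234567890123456.789M;        // 19 significant digits
Console.WriteLine(IncrementLastDigit(big)); // likely not 1234567890123456.790
// Convert.ToDouble(big) cannot represent the value exactly, so the
// rounded result no longer matches the original digits.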
I've come up with a new solution that is different from Joe's; it should give a minuscule performance increase.
public static decimal IncrementLowestDigit(this decimal value, int amount)
{
    int[] bits = decimal.GetBits(value);
    // bits[0] is the low 32 bits of the 96-bit mantissa. If its high bit is
    // set and adding amount wraps past zero, the addition overflows 32 bits,
    // so carry into the middle word (and, if that wraps too, the high word).
    // Note: the carry logic assumes a non-negative amount.
    if (bits[0] < 0 && amount + bits[0] >= 0)
    {
        bits[1]++;
        if (bits[1] == 0)
        {
            bits[2]++;
        }
    }
    bits[0] += amount;
    return new decimal(bits);
}
Test
I tested my method against Joe's (IncrementLastDigit(a, times) below is presumably an overload of Joe's method that applies the increment the given number of times).
private static void Test(int l, int m, int h, int e, int times)
{
    decimal a = new decimal(new[] { l, m, h, e });
    decimal b = a.IncrementLowestDigit(times);
    decimal c = IncrementLastDigit(a, times);
    Console.WriteLine(a);
    Console.WriteLine(b);
    Console.WriteLine(c);
    Console.WriteLine();
}
Test(0, 0, 0, 0x00000000, 1);
Test(0, 0, 0, 0x00000000, 2);
Test(0, 0, 0, 0x00010000, 1);
Test(0, 0, 0, 0x00010000, 2);
Test(0, 0, 0, 0x00020000, 1);
Test(0, 0, 0, 0x00020000, 2);
Test(-1, 0, 0, 0x00000000, 1);
Test(-1, 0, 0, 0x00000000, 2);
Test(-1, 0, 0, 0x00010000, 1);
Test(-1, 0, 0, 0x00010000, 2);
Test(-1, 0, 0, 0x00020000, 1);
Test(-1, 0, 0, 0x00020000, 2);
Test(-2, 0, 0, 0x00000000, 1);
Test(-2, 0, 0, 0x00000000, 2);
Test(-2, 0, 0, 0x00010000, 1);
Test(-2, 0, 0, 0x00010000, 2);
Test(-2, 0, 0, 0x00020000, 1);
Test(-2, 0, 0, 0x00020000, 2);
Test(-2, 0, 0, 0x00000000, 3);
Test(0, 1, 0, 0x00000000, 1);
Test(0, 1, 0, 0x00000000, 2);
Test(0, 1, 0, 0x00010000, 1);
Test(0, 1, 0, 0x00010000, 2);
Test(0, 1, 0, 0x00020000, 1);
Test(0, 1, 0, 0x00020000, 2);
Test(-1, 2, 0, 0x00000000, 1);
Test(-1, 2, 0, 0x00000000, 2);
Test(-1, 2, 0, 0x00010000, 1);
Test(-1, 2, 0, 0x00010000, 2);
Test(-1, 2, 0, 0x00020000, 1);
Test(-1, 2, 0, 0x00020000, 2);
Test(-2, 3, 0, 0x00000000, 1);
Test(-2, 3, 0, 0x00000000, 2);
Test(-2, 3, 0, 0x00010000, 1);
Test(-2, 3, 0, 0x00010000, 2);
Test(-2, 3, 0, 0x00020000, 1);
Test(-2, 3, 0, 0x00020000, 2);
Just for Laughs
I did a performance test with 10 million iterations on a 3 GHz Intel chip:
Mine: 11.6 ns per call
Joe's: 32.1 ns per call
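For reference, a harness along these lines reproduces the measurement (a sketch; the original benchmark code wasn't posted, and it assumes the IncrementLowestDigit extension above is in scope):
using System;
using System.Diagnostics;

const int Iterations = 10_000_000;
decimal d = 1.2349M;
var sw = Stopwatch.StartNew();
for (int i = 0; i < Iterations; i++)
{
    d = d.IncrementLowestDigit(1); // swap in IncrementLastDigit(d) to time Joe's
}
sw.Stop();
Console.WriteLine(sw.Elapsed.TotalMilliseconds * 1_000_000 / Iterations + " ns/call");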