I'm writing a financial application in C# where performance (i.e. speed) is critical. Because it's a financial app I have to use the Decimal datatype intensively.

I've optimized the code as much as I could with the help of a profiler. Before using Decimal, everything was done with the Double datatype and the speed was several times faster. However, Double is not an option because of its binary nature, causing a lot of precision errors over the course of multiple operations.

Is there any decimal library that I can interface with C# that could give me a performance improvement over the native Decimal datatype in .NET?

Based on the answers I already got, I noticed I was not clear enough, so here are some additional details:

  • The app has to be as fast as it can possibly go (getting back to the speed it had with Double instead of Decimal would be a dream). Double was about 15x faster than Decimal, since Double operations are done in hardware.
  • The hardware is already top-notch (I'm running on a dual Xeon quad-core) and the application uses threads, so CPU utilization is always at 100%. Additionally, the app runs in 64-bit mode, which gives it a measurable performance advantage over 32-bit.
  • I've optimized past the point of sanity (more than a month and a half of optimizing; believe it or not, it now takes approx. 1/5000 of the time it originally took to do the reference calculations). The optimization covered everything: string processing, I/O, database access and indexes, memory, loops, changing the way some things were done, and even using "switch" over "if" everywhere it made a difference. The profiler now clearly shows that the remaining performance culprit is the Decimal datatype operators. Nothing else adds up to a considerable amount of time.
  • You have to believe me here: I've gone as far as I could possibly go in the realm of C#/.NET to optimize the application, and I'm really amazed at its current performance. I'm now looking for a good idea to improve Decimal performance to something close to Double. I know it's only a dream, but I just wanted to check that I've thought of everything possible. :)

Thanks!

+7  A: 

The problem is basically that double/float are supported in hardware, while Decimal and the like are not. In other words, you have to choose between speed with limited precision, or greater precision with poorer performance.

Brian Rasmussen
+4  A: 

You say it needs to be fast, but do you have concrete speed requirements? If not, you may well optimise past the point of sanity :)

As a friend sitting next to me has just suggested, can you upgrade your hardware instead? That's likely to be cheaper than rewriting code.

The most obvious option is to use integers instead of decimals - where one "unit" is something like "a thousandth of a cent" (or whatever you want - you get the idea). Whether that's feasible or not will depend on the operations you're performing on the decimal values to start with. You'll need to be very careful when handling this - it's easy to make mistakes (at least if you're like me).

Did the profiler show particular hotspots in your application that you could optimise individually? For instance, if you need to do a lot of calculations in one small area of code, you could convert from decimal to an integer format, do the calculations and then convert back. That could keep the API in terms of decimals for the bulk of the code, which may well make it easier to maintain. However, if you don't have pronounced hotspots, that may not be feasible.
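
To make that idea concrete, here is a minimal sketch of converting to scaled longs inside a hotspot and back to decimal at the edges. The scale of four decimal places and the names FixedPointSketch, ToFixed and FromFixed are purely illustrative assumptions, not anything from the answer:

static class FixedPointSketch
{
    // Assumed scale: four decimal places (10,000 units per whole currency unit).
    private const long Scale = 10000;

    public static long ToFixed(decimal value)
    {
        // Round once at the chosen scale, then work in plain longs.
        return (long)decimal.Round(value * Scale, 0);
    }

    public static decimal FromFixed(long value)
    {
        return (decimal)value / Scale;
    }

    public static decimal SumPrices(decimal[] prices)
    {
        // The hot loop runs entirely on longs; decimal is only touched at the edges.
        long total = 0;
        foreach (decimal price in prices)
        {
            total += ToFixed(price);
        }
        return FromFixed(total);
    }
}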

+1 for profiling and telling us that speed is a definite requirement, btw :)

Jon Skeet
@Downvoter: Care to comment?
Jon Skeet
+21  A: 

You can use the long datatype. Sure, you won't be able to store fractions in there, but if you code your app to store pennies instead of pounds, you'll be OK. Accuracy is 100% for integer types, and unless you're working with truly vast numbers, a 64-bit long will give you plenty of range.

If you can't mandate storing pennies, then wrap an integer in a class and use that.
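
For what it's worth, a rough sketch of such a wrapper could look like the following. The Money type, its pennies field and its members are purely illustrative, and (as the comments below point out) even a struct wrapper adds some overhead compared to a raw long:

// Hypothetical wrapper: stores whole pennies in a single long field.
public struct Money
{
    private readonly long pennies;

    private Money(long pennies)
    {
        this.pennies = pennies;
    }

    public static Money FromPennies(long pennies)
    {
        return new Money(pennies);
    }

    public static Money operator +(Money a, Money b)
    {
        return new Money(a.pennies + b.pennies);
    }

    public static Money operator -(Money a, Money b)
    {
        return new Money(a.pennies - b.pennies);
    }

    // Convert back to decimal only at the boundaries of the system.
    public decimal ToDecimalPounds()
    {
        return (decimal)pennies / 100;
    }
}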

gbjbaanb
I agree. Use a 64-bit machine and longs. Plus, it comes to mind that .NET has a tool to generate more machine-specific code; I believe it's called ngen. Perhaps there is performance to be gained there...
Vilx-
This is definitely the way to go. Use long or int, and store pennies, or fractions of pennies depending on what precision you require. Many operations will now be as fast or faster than using doubles.
Chris
Wrapping it in a class will incur heap overheads, and even a struct will slow things down, due to operator overloading or function calls being required, so raw long/int storage is best for performance.
Chris
That's a really interesting idea. Normally financial applications need to be more concerned about small numbers than the other way around. I really like this solution. +1
Trap
You don't need to store the fractions - just keep a fixed scaling factor "in mind." For example, if your type stores 1/1000 of a cent, just divide your result at the end by this factor (100,000) to get it in dollars.
ILoveFortran
This is actually what the decimal type does, except the scaling factor is variable and the type uses a 96-bit representation for the digits (128 bits total); that's why it's slower than the suggested version using long (64 bits). (These comment fields are way too short.)
ILoveFortran
@Vilx: Ngen will not speed up a program. All Ngen will do is speed up how fast the program starts up.
Brian
Using long could be really hard on the existing logic. Simply saying "store pennies" may work for simple accounting, but not necessarily for compound interest calculations and other non-integer formulas. Who knows what sequence of calculations he's doing?
Nosredna
you just have to remember to store the smallest value you're prepared to track. Even in Decimal types this happens, only they try to give you far more precision than you may need. Decimal types still run out of precision eventually.
gbjbaanb
+1  A: 

I cannot comment or vote down yet since I just started on Stack Overflow. My comment on alexsmart (posted 23 Dec 2008 12:31) is that the expression Round(n/precision, precision), where n is int and precision is long, will not do what he thinks it does:

1) n/precision will perform integer division, i.e. the result is already truncated and you lose any decimal places. The truncation behavior is also different from Math.Round(...).

2) The code "return Math.Round(n/precision, precision).ToString()" does not compile due to an ambiguity between Math.Round(double, int) and Math.Round(decimal, int). You would have to cast to decimal (not double, since it is a financial app) and therefore might as well go with decimal in the first place.

3) n/precision, where precision is 4, will not truncate to four decimals but divide by 4. E.g., Math.Round((decimal)(1234567/4), 4) returns 308641 (1234567/4 = 308641.75), while what you probably wanted to get is 1235000 (1234567 rounded to a precision of 4 significant digits). Note that Math.Round rounds to a fixed number of decimal places, not to a fixed precision (see the short snippet below).
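
A short snippet illustrating the integer-division pitfall from point 1 and the scaling confusion from point 3, using the same numbers as above:

int n = 1234567;
int precision = 4;

// The division happens in integer arithmetic before Math.Round ever sees the value:
decimal truncated  = Math.Round((decimal)(n / precision), 4); // 308641 -- the .75 is already gone
decimal fractional = Math.Round((decimal)n / precision, 4);   // 308641.75 -- still not "4 significant digits"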

Update: I can add comments now but there is not enough space to put this one into the comment area.

ILoveFortran
+3  A: 

What about MMX/SSE/SSE2?

I think it might help: decimal is a 128-bit datatype and SSE2 registers are 128 bits wide too, so SSE2 should be able to operate on all of those bits in just a few CPU cycles.

You can write a DLL that uses SSE2 intrinsics in VC++ and then call that DLL from your application. For example, you could do something like this:

VC++

#include <emmintrin.h>  // SSE2 intrinsics

// Adds the four 32-bit lanes of two 128-bit values and writes them to 'result'.
// Note: the lanes are added independently, with no carry between them.
extern "C" __declspec(dllexport) void sse2_add(const __int32* arr1, const __int32* arr2, __int32* result)
{
    __m128i mi1 = _mm_setr_epi32(arr1[0], arr1[1], arr1[2], arr1[3]);
    __m128i mi2 = _mm_setr_epi32(arr2[0], arr2[1], arr2[2], arr2[3]);

    __m128i mi3 = _mm_add_epi32(mi1, mi2);
    _mm_storeu_si128(reinterpret_cast<__m128i*>(result), mi3);
}

C#

[DllImport("sse2.dll", CallingConvention = CallingConvention.Cdecl)]
private static extern void sse2_add(int[] arr1, int[] arr2, int[] result);

public static decimal addDec(decimal d1, decimal d2)
{
    // decimal.GetBits returns the four 32-bit words of the 128-bit representation
    // (the low, mid and high digit words plus the sign/scale flags word).
    int[] arr1 = decimal.GetBits(d1);
    int[] arr2 = decimal.GetBits(d2);

    int[] resultArr = new int[4];
    sse2_add(arr1, arr2, resultArr);

    return new decimal(resultArr);
}
A: 

I don't think SSE2 instructions can easily work with .NET Decimal values. The .NET Decimal data type is a 128-bit decimal floating-point type (http://en.wikipedia.org/wiki/Decimal128_floating-point_format), while SSE2 instructions work with 128-bit integer types.

Serge Shandar