I'm the author of the DecInt (Decimal Integer) library so I'll make a few comments.
The DecInt library was specifically designed to work with very large integers that need to be converted to decimal format. The problem is that most arbitrary-precision libraries store values in binary: that is the fastest and most memory-efficient choice for arithmetic, but converting from binary to decimal is usually slow. Python's binary-to-decimal conversion uses an O(n^2) algorithm and gets slow very quickly.
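You can see this for yourself with a rough timing sketch (the exact numbers depend on your machine and CPython version; Python 3.11+ caps int-to-str conversion unless the limit is raised, and very recent versions have improved the conversion algorithm):

```python
import sys
import time

# Python 3.11+ refuses int-to-str conversion beyond ~4300 digits
# unless the limit is raised first.
if hasattr(sys, "set_int_max_str_digits"):
    sys.set_int_max_str_digits(2_000_000)

for digits in (50_000, 100_000, 200_000):
    n = 10 ** digits - 1            # a number with `digits` decimal digits
    start = time.perf_counter()
    s = str(n)                      # binary-to-decimal conversion
    print(f"{digits:>7} digits: {time.perf_counter() - start:.3f}s")

# With an O(n^2) conversion, doubling the digit count roughly
# quadruples the time.
```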
DecInt uses a large decimal radix (usually 10^250) and stores the very large number in blocks of 250 digits. Since the digits are already stored in decimal form, converting a very large number to a decimal string runs in O(n).
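Here is a minimal sketch of the representation (illustrative only, not DecInt's actual code; in DecInt the value lives in this block form throughout, so the expensive split from binary happens at most once):

```python
RADIX = 10 ** 250   # each block holds 250 decimal digits

def to_blocks(n):
    """Split a non-negative int into little-endian base-10**250 blocks."""
    blocks = []
    while n:
        n, low = divmod(n, RADIX)
        blocks.append(low)
    return blocks or [0]

def to_decimal_string(blocks):
    """Join blocks into a decimal string. Each block is already decimal,
    so this step is O(n) in the number of digits."""
    parts = [str(blocks[-1])]                       # most-significant block
    parts += [str(b).zfill(250) for b in reversed(blocks[:-1])]
    return "".join(parts)
```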
Naive, or grade school, multiplication has a running time of O(n^2). Python uses Karatsuba multiplication, which has a running time of O(n^1.585). DecInt uses a combination of Karatsuba, Toom-Cook, and Nussbaumer convolution to get a running time of O(n*ln(n)).
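For reference, here is a minimal Karatsuba sketch on plain ints; it is illustrative only, since DecInt applies these algorithms to its decimal-block representation and Python's builtin multiply already uses Karatsuba internally:

```python
def karatsuba(x, y, cutoff=64):
    """Multiply two non-negative ints with Karatsuba's three-multiply
    recursion: O(n^1.585) instead of grade school's O(n^2)."""
    if x < 10 ** cutoff or y < 10 ** cutoff:
        return x * y                      # small enough: builtin multiply
    m = max(x.bit_length(), y.bit_length()) // 2
    hi_x, lo_x = x >> m, x & ((1 << m) - 1)
    hi_y, lo_y = y >> m, y & ((1 << m) - 1)
    a = karatsuba(hi_x, hi_y)             # high parts
    b = karatsuba(lo_x, lo_y)             # low parts
    c = karatsuba(hi_x + lo_x, hi_y + lo_y) - a - b   # cross terms
    return (a << (2 * m)) + (c << m) + b
```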
Even though DecInt has much higher overhead, the combination of O(n*ln(n)) multiplication and O(n) conversion will eventually beat Python's O(n^1.585) multiplication and O(n^2) conversion. For example, at a million digits, n^2 is on the order of 10^12 elementary operations while n*ln(n) is roughly 1.4*10^7, so the asymptotic advantage eventually dwarfs any constant-factor overhead.
Since most computations don't require every result to be displayed in decimal format, almost every arbitrary-precision library uses binary because that makes the computations faster. DecInt targets a very small niche. For large enough numbers, DecInt will be faster for multiplication and division than native Python. But if you are after pure performance, a library like GMPY will be the fastest.
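For example, with gmpy2 (the maintained successor to the original GMPY module):

```python
import gmpy2

# GMP stores values in binary but uses subquadratic algorithms for
# both multiplication and base conversion.
a = gmpy2.mpz(7) ** 1_000_000       # a number with roughly 845,000 digits
b = gmpy2.mpz(3) ** 1_000_000
product = a * b                     # fast multiplication
print(gmpy2.num_digits(product))    # decimal digit count
```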
I'm glad you found DecInt helpful.