There are three libraries mentioned on the Arbitrary Precision Arithmetic page: java.math (containing the mentioned BigDecimal), Apfloat and JScience. I ran a little speed check on them that just uses addition and multiplication.
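For the curious, a stripped-down version of the kind of loop I timed looks roughly like the sketch below (not my exact test code). It only covers BigDecimal and Apfloat; the JScience run works the same way with its Real type. The class name and the DIGITS/ITERATIONS values are just placeholders to adjust:

```java
import java.math.BigDecimal;
import java.math.MathContext;
import org.apfloat.Apfloat;

public class PrecisionBench {
    // Placeholder sizes; adjust to reproduce the digit counts discussed below.
    static final int DIGITS = 1000;
    static final int ITERATIONS = 1000;

    public static void main(String[] args) {
        // Build an operand with the desired number of significant digits.
        StringBuilder sb = new StringBuilder("1.");
        for (int i = 0; i < DIGITS; i++) sb.append((char) ('1' + i % 9));
        String operand = sb.toString();

        // BigDecimal: round every operation to DIGITS digits, otherwise the
        // result length would grow with each multiplication.
        MathContext mc = new MathContext(DIGITS);
        BigDecimal bd = new BigDecimal(operand, mc);
        long t0 = System.nanoTime();
        BigDecimal bdAcc = BigDecimal.ONE;
        for (int i = 0; i < ITERATIONS; i++) {
            bdAcc = bdAcc.multiply(bd, mc).add(bd, mc);
        }
        System.out.printf("BigDecimal: %d ms%n", (System.nanoTime() - t0) / 1_000_000);

        // Apfloat: the precision is carried by the value itself.
        Apfloat ap = new Apfloat(operand, DIGITS);
        t0 = System.nanoTime();
        Apfloat apAcc = new Apfloat(1, DIGITS);
        for (int i = 0; i < ITERATIONS; i++) {
            apAcc = apAcc.multiply(ap).add(ap);
        }
        System.out.printf("Apfloat:    %d ms%n", (System.nanoTime() - t0) / 1_000_000);
    }
}
```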
The result is that for a relatively small number of digits BigDecimal is OK (about half as fast as the others at 1000 digits), but with more digits it falls far behind: JScience is roughly 4 times faster. The clear performance winner, however, is Apfloat. The other libraries seem to use naive multiplication algorithms whose running time grows with the square of the number of digits, while Apfloat's time grows almost linearly: at 10000 digits it was 4 times as fast as JScience, and at 40000 digits it was 16 times as fast.
On the other hand, JScience provides EXCELLENT functionality for mathematical problems: matrices, vectors, symbolic algorithms, solving systems of equations and whatnot. So I'll probably go with JScience and later write a wrapper to integrate Apfloat into JScience's algorithms; thanks to the good design this seems easily possible.
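To sketch what such a wrapper could look like: JScience's generic algorithms are written against its algebraic-structure interfaces, so the main piece should be an adapter that wraps an Apfloat and implements the Field interface. The class below is only an illustration; the ApfloatElement name is made up, and the exact set of methods required by org.jscience.mathematics.structure.Field (and whether this alone is enough to plug into DenseMatrix and the solvers) needs to be checked against the 4.3 javadoc.

```java
import org.apfloat.Apfloat;
import org.jscience.mathematics.structure.Field;

// Hypothetical adapter letting JScience's generic algorithms run on Apfloat
// numbers. Method names (plus/opposite/times/inverse) follow JScience's
// structure interfaces as I remember them; verify against the 4.3 javadoc.
public final class ApfloatElement implements Field<ApfloatElement> {

    private final Apfloat value;

    public ApfloatElement(Apfloat value) {
        this.value = value;
    }

    public ApfloatElement plus(ApfloatElement that) {
        return new ApfloatElement(value.add(that.value));
    }

    public ApfloatElement opposite() {
        return new ApfloatElement(value.negate());
    }

    public ApfloatElement times(ApfloatElement that) {
        return new ApfloatElement(value.multiply(that.value));
    }

    public ApfloatElement inverse() {
        // 1/x at the precision carried by the wrapped value.
        return new ApfloatElement(new Apfloat(1, value.precision()).divide(value));
    }

    public Apfloat value() {
        return value;
    }

    @Override
    public String toString() {
        return value.toString();
    }
}
```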
CORRECTION: Unfortunately the arbitrary-precision classes of JScience seem to have a number of bugs in the current release 4.3, and I am not sure how active the development is. I think Apfloat is much better tested.