Hi,
I'm writing a bignum library and want efficient data types to represent the digits: an integer type for each digit, and a type exactly twice as wide for the intermediate results when adding and multiplying (see the sketch after the struct below).
I'm using some C99 functionality (in particular <stdint.h>), but otherwise trying to stay close to ANSI C.
Currently I have the following in my bignum library:
#include <stdint.h>
#if defined(__LP64__) || defined(_LP64) || defined(__amd64) || defined(__amd64__) || defined(__x86_64) || defined(__x86_64__)
typedef uint64_t u_w;
typedef uint32_t u_hw;
#define BIGNUM_DIGITS 2048
#define U_HW_BITS 32
#define U_W_BITS 64
#define U_HW_MAX UINT32_MAX
#define U_HW_MIN 0 /* <stdint.h> has no UINT32_MIN; the minimum of an unsigned type is 0 */
#define U_W_MAX UINT64_MAX
#define U_W_MIN 0
#else
typedef uint32_t u_w;
typedef uint16_t u_hw;
#define BIGNUM_DIGITS 4096
#define U_HW_BITS 16
#define U_W_BITS 32
#define U_HW_MAX UINT16_MAX
#define U_HW_MIN 0 /* likewise, no UINT16_MIN */
#define U_W_MAX UINT32_MAX
#define U_W_MIN 0
#endif
typedef struct bn
{
    int sign;
    int n_digits; // number of digits (limbs); excludes the carry
    int carry;
    u_hw tab[BIGNUM_DIGITS];
} bn;
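For context, this is the kind of digit arithmetic the double-width type is meant for; a minimal sketch (add_digits is an illustrative name, not part of the library yet):

static u_hw add_digits(u_hw a, u_hw b, u_hw *carry)
{
    /* The double-width u_w holds a + b + carry without overflow;
       the low half is the result digit, the high half the new carry. */
    u_w sum = (u_w)a + (u_w)b + (u_w)*carry;
    *carry = (u_hw)(sum >> U_HW_BITS);
    return (u_hw)(sum & U_HW_MAX);
}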
As I haven't yet written a routine to print a bignum in decimal, I inspect the intermediate array by printing the value of each digit with printf. However, I don't know which conversion specifier to use; ideally I would print each digit in hexadecimal.
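The best I've come up with so far is casting each digit to unsigned long and printing with %lx, which is lossless since u_hw is at most 32 bits wide, but it feels like a workaround rather than a specifier that matches the type; a sketch (bn_dump is an illustrative name):

#include <stdio.h>

void bn_dump(const bn *x)
{
    int i;
    /* Each hex digit covers 4 bits, so pad every limb to
       U_HW_BITS / 4 places; print the most significant limb first. */
    for (i = x->n_digits - 1; i >= 0; i--)
        printf("%0*lx ", U_HW_BITS / 4, (unsigned long)x->tab[i]);
    putchar('\n');
}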
The underlying issue is that I want two data types, one exactly twice as wide as the other, and I want to use them with printf's standard conversion specifiers. It would be ideal if int were 32 bits and long 64 bits, but I don't know how to guarantee this with the preprocessor, and when it comes time to use functions such as printf, which only know about the standard types, I no longer know what to use.
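The closest thing I know of to enforcing the size relationship at compile time without C11 is the negative-array-size trick, along these lines (the typedef name is illustrative), though it still doesn't answer the printf question:

/* Compile-time check: the array size is -1, a constraint violation,
   unless u_w is exactly twice as wide as u_hw. */
typedef char assert_u_w_twice_u_hw[sizeof(u_w) == 2 * sizeof(u_hw) ? 1 : -1];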