I am doing a simple AES encryption in Java:
Cipher cipher = Cipher.getInstance("AES");
cipher.init(Cipher.ENCRYPT_MODE, getAES128SecretKey());
byte[] encrypted = cipher.doFinal(input);
The output is converted to hex and stored in a database.
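For context, the hex step is just a plain byte-to-hex conversion, roughly like this (bytesToHex() is my own helper name, shown only for illustration):

byte[] encrypted = cipher.doFinal(input);
String hex = bytesToHex(encrypted); // this is what gets stored in the database

static String bytesToHex(byte[] bytes) {
    StringBuilder sb = new StringBuilder(bytes.length * 2);
    for (byte b : bytes) {
        sb.append(String.format("%02X", b)); // negative bytes format as their unsigned value
    }
    return sb.toString();
}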
A later process, written in C/C++, reads the hex, converts it back to bytes, and decrypts it.
The issue is that, apparently, the C implementation decrypts the text correctly but also keeps extra, unneeded bytes at the end.
For example (not real values):
Java: encrypt("eric") -> 1234567890FFFFFF1234567890FFFFFF (hex)
Java: decrypt("1234567890FFFFFF1234567890FFFFFF") -> "eric"
C: decrypt("1234567890FFFFFF1234567890FFFFFF") -> "eric XXXX XXXX XXXX"
I do not own the C decryption code, and the party using it has suggested that I append a null terminator character '\0'
to the Java bytes before encryption. My question is: would that work, and should I even entertain that idea?
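Their suggestion, as I understand it, would amount to something like this on my side (just a sketch of my reading of it):

import java.nio.charset.StandardCharsets;
import java.util.Arrays;

byte[] plain = "eric".getBytes(StandardCharsets.UTF_8);
byte[] withTerminator = Arrays.copyOf(plain, plain.length + 1); // copyOf zero-fills the added last byte
byte[] encrypted = cipher.doFinal(withTerminator); // same cipher setup as above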
Reading the first answer to the question Padding error when using aes encryption in java and decryption in c (it is unaccepted, but it sounds correct to me), that seems to be the correct way to go about this.
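As I understand it, Cipher.getInstance("AES") with the default provider ends up using PKCS5 padding (effectively "AES/ECB/PKCS5Padding"), so the extra bytes the C side sees would be that padding, and the value of the last decrypted byte says how many bytes to drop. In Java terms the stripping would look roughly like this (a sketch only, assuming the decryptor hands back the raw padded bytes):

import java.util.Arrays;

static byte[] stripPkcs5Padding(byte[] decrypted) {
    int padLength = decrypted[decrypted.length - 1] & 0xFF; // PKCS5: last byte = number of pad bytes
    if (padLength < 1 || padLength > 16 || padLength > decrypted.length) {
        throw new IllegalArgumentException("invalid padding");
    }
    // A stricter check would also verify that each of the padLength trailing bytes has this value.
    return Arrays.copyOf(decrypted, decrypted.length - padLength);
}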
However, what if a string is encrypted in C and decrypted in C: will that C-encrypted string now have issues when it is decrypted in C and the padding is removed as if it were a Java-encrypted string? Or would this be a non-issue?