The simplest and most reliable way to do this is to use the length of the packet that you read from the UDP socket. The javadoc for DatagramSocket.receive(...)
says this:
Receives a datagram packet from this socket. When this method returns, the DatagramPacket's buffer is filled with the data received. The datagram packet also contains the sender's IP address, and the port number on the sender's machine.
This method blocks until a datagram is received. The length field of the datagram packet object contains the length of the received message. If the message is longer than the packet's length, the message is truncated.
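To illustrate the first approach, here is a minimal sketch (the class name and the loopback send/receive pairing are just for the demonstration) that reads a datagram and builds the String from packet.getLength() rather than from the full buffer:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class UdpReceiveExample {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket receiver = new DatagramSocket(0);
             DatagramSocket sender = new DatagramSocket()) {
            byte[] payload = "hello".getBytes(StandardCharsets.UTF_8);
            sender.send(new DatagramPacket(payload, payload.length,
                    InetAddress.getLoopbackAddress(), receiver.getLocalPort()));

            byte[] buff = new byte[1024];             // deliberately oversized
            DatagramPacket packet = new DatagramPacket(buff, buff.length);
            receiver.receive(packet);                 // blocks until a datagram arrives

            // Use packet.getLength(), not buff.length, so none of the
            // unused tail of the buffer ends up in the String.
            String res = new String(packet.getData(), packet.getOffset(),
                    packet.getLength(), StandardCharsets.UTF_8);
            System.out.println(res);                  // prints "hello"
        }
    }
}
```

Because the String is built from exactly the bytes received, no trimming or scanning is needed afterwards.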
If you cannot do that, then the following will allocate a minimum-sized String without creating any unnecessary temporary objects.
byte[] buff = ... // read from socket.
// Find the byte offset of the first 'non-character' in buff.
// Here a zero byte is treated as the terminator; adjust the test
// to suit your character set and application.
int i;
for (i = 0; i < buff.length && buff[i] != 0; i++) { /* scan only */ }
// Allocate the String from the bytes actually used.
String res = new String(buff, 0, i, charsetName);
Note that the criterion for detecting a non-character is character-set and application specific; in many cases, testing for a zero byte is sufficient.
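As a concrete, runnable illustration of the scan-and-trim approach (assuming a zero byte marks the end of the payload; the class name is just for the demo):

```java
import java.nio.charset.StandardCharsets;

public class TrimBufferExample {
    public static void main(String[] args) {
        // Simulate an oversized receive buffer: the payload at the front,
        // the rest of the array left as zero bytes.
        byte[] buff = new byte[16];
        byte[] data = "hello".getBytes(StandardCharsets.UTF_8);
        System.arraycopy(data, 0, buff, 0, data.length);

        // Scan for the first zero byte.
        int i;
        for (i = 0; i < buff.length && buff[i] != 0; i++) { /* scan only */ }

        // Only the bytes before the terminator go into the String.
        String res = new String(buff, 0, i, StandardCharsets.UTF_8);
        System.out.println(res + " (" + res.length() + " chars)");
    }
}
```

Note that this only works if the payload itself can never legitimately contain a zero byte.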
EDIT
What does the javadoc exactly mean by "The length of the new String is a function of the charset, and hence may not be equal to the length of the subarray."
It is pointing to the fact that for some character encodings (for example UTF-8, UTF-16, JIS, etc) some characters are represented by two or more bytes. So for example, 10 bytes of UTF-8 might represent fewer than 10 characters.