I have a program that needs to process a byte array as its data source. Originally the program worked fine when the byte array size was 3000 bytes. Now the data size has increased and the array size needs to change from 3000 to 30000 bytes (10 times larger).
I wrote a sample benchmark program to test the looping time. I expected the required CPU time to increase roughly linearly with the array size, but the benchmark shows that processing 30000 bytes takes over 35 times as long as processing 3000 bytes.
Here is my benchmark program. Can it be improved so that it only uses around 10 times the CPU time?
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.util.Random;

public class ByteArrayBenchmark {

    public static void main(String[] args) {
        int TestArraySize = 30000;
        String strFinalMessage = "";
        // create a dummy byte array
        byte[] bytearrayMessageContent = new byte[TestArraySize];
        for (int i = 0; i < TestArraySize; i++) {
            // fill characters A-J into the dummy array
            bytearrayMessageContent[i] = (byte) (i % 10 + 65);
        }
        System.out.println(bytearrayMessageContent.length);
        // record the start time
        long lngCurrentTime = System.currentTimeMillis();
        // process the byte array
        int intTHMessageLenAdj = TestArraySize;
        try {
            InputStream input = new ByteArrayInputStream(bytearrayMessageContent);
            while (intTHMessageLenAdj > 0) {
                // get a random number of bytes to process
                int RandomLength = getNextRandom();
                if (RandomLength > intTHMessageLenAdj) {
                    RandomLength = intTHMessageLenAdj;
                }
                // copy the bytes to be processed into a byte array and process them
                byte[] bytearrayMsgTrunk = new byte[RandomLength];
                input.read(bytearrayMsgTrunk);
                // do some logic here
                strFinalMessage += new String(bytearrayMsgTrunk) + "||";
                // repeat the loop until all bytes are read
                intTHMessageLenAdj -= RandomLength;
            }
            input.close();
        } catch (Exception ex) {
            ex.printStackTrace();
        }
        // compute the elapsed time
        lngCurrentTime = System.currentTimeMillis() - lngCurrentTime;
        //System.out.println(strFinalMessage);
        System.out.println(lngCurrentTime);
    }

    public static int getNextRandom() {
        // chunk info is around 4 bytes in size (nextInt(8) returns 0-7)
        Random random = new Random();
        return random.nextInt(8);
    }
}
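
One thing I am not sure about is the strFinalMessage += line, since the string it appends to grows on every pass through the loop. Below is an untested sketch of the same chunking loop where the chunks are collected in a StringBuilder and joined once at the end; the class name StringBuilderVariant and the process method are placeholders I made up for this sketch, and it reuses a single Random instead of creating one per chunk. Is timing something like this against the version above the right way to check where the extra time goes?

import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.util.Random;

public class StringBuilderVariant {
    // Same chunking logic as the benchmark above, but chunks are appended
    // to a StringBuilder instead of concatenated onto a String with +=.
    public static String process(byte[] bytearrayMessageContent) throws Exception {
        StringBuilder builder = new StringBuilder();
        Random random = new Random();   // one Random reused for every chunk
        int remaining = bytearrayMessageContent.length;
        InputStream input = new ByteArrayInputStream(bytearrayMessageContent);
        while (remaining > 0) {
            // random chunk length 0-7, capped at the number of bytes left
            int chunkLength = Math.min(random.nextInt(8), remaining);
            byte[] chunk = new byte[chunkLength];
            input.read(chunk);          // ByteArrayInputStream fills the whole chunk here
            builder.append(new String(chunk)).append("||");
            remaining -= chunkLength;
        }
        input.close();
        return builder.toString();
    }
}

I would time it the same way, with System.currentTimeMillis() around a call like StringBuilderVariant.process(bytearrayMessageContent).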