If you're transcoding by converting the original MP3 to an uncompressed format (like WAV) and then re-encoding to MP3 at the higher bitrate, then it would be impossible to determine the original file's bitrate given only the converted file. I suppose this process might produce some incredibly subtle audio artifacts that could be analyzed statistically, but this would be a pretty herculean effort, in my opinion, and unlikely to succeed.
I'm not sure it's even possible to up-rate an MP3 without decoding and re-encoding, but even if it is, the process still would not preserve the original bitrate in the new file. Again, the process may leave some kind of weird, measurable artifacts that hint at the original bitrate, but I doubt it.
Update: now that I think about it, it might be possible somehow to detect this, although I have no idea how to do it programmatically. The human ear can make distinctions like this (some of them, anyway): I can clearly tell the difference between 128k MP3s and 192k MP3s, so discriminating between 96k and 320k would be a piece of cake. A 96k MP3 that had been up-rated would still have all the audio artifacts present in the 96k version (plus new ones, unfortunately).
I don't know how you would go about determining this with code, however. If I had to make this work, I'd train pigeons to do it (and I'm not kidding about that).
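If I had to take a stab at it in software rather than with pigeons, the one angle I can think of (purely a sketch, not something I've tested) is the frequency spectrum: encoders at low bitrates apply a fairly aggressive lowpass filter (somewhere around 15-16 kHz at 96k, versus ~20 kHz for a real 320k file), and that shelf survives re-encoding at a higher bitrate. The snippet below assumes you've already decoded the MP3 to a 44.1 kHz WAV (e.g. with ffmpeg), and the file name, cutoff, and threshold are all made-up placeholders you'd have to tune:

```python
# Rough sketch: measure how much spectral energy survives above ~16 kHz.
# A genuine 320k MP3 usually has content up to ~20 kHz; a file transcoded
# up from 96k tends to show a hard shelf around 15-16 kHz even after
# re-encoding. Assumes the MP3 was decoded to WAV first; the cutoff and
# threshold values here are illustrative, not calibrated.
import numpy as np
from scipy.io import wavfile
from scipy.signal import welch

CUTOFF_HZ = 16000        # roughly where 96k encoders start shelving off
RATIO_THRESHOLD = 1e-4   # energy ratio below this looks like an up-rated file

def looks_uprated(wav_path):
    rate, samples = wavfile.read(wav_path)
    if samples.ndim > 1:                  # mix stereo down to mono
        samples = samples.mean(axis=1)
    samples = samples.astype(np.float64)

    # Estimate the power spectral density and compare the energy above
    # the cutoff to the total energy.
    freqs, psd = welch(samples, fs=rate, nperseg=8192)
    high_energy = psd[freqs >= CUTOFF_HZ].sum()
    total_energy = psd.sum()

    return (high_energy / total_energy) < RATIO_THRESHOLD

print(looks_uprated("suspect.wav"))
```

Even then, a legitimately quiet or dull recording could trip the threshold, so I wouldn't treat anything like this as more than a hint.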