A raw socket allows you to communicate with lower-level protocols such as Ethernet and IP directly. Yes, going lower can give you some advantages, but you have to balance them against what you lose.
In this case, you mention that the server is written to use UDP, so on the wire the traffic has to be UDP. If you switch to a raw socket, you will have to encapsulate your application data in valid UDP datagrams yourself: building each header (source and destination ports, length, checksum) correctly so that, to the server, your client appears as just another UDP client. Doing all this requires writing a lot of code, and carries downsides such as increased maintenance and a higher cost to get it working correctly.
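To make concrete what the kernel normally does for you, here is a minimal sketch of hand-building a UDP datagram per RFC 768. The port numbers and payload are arbitrary examples; a real raw-socket client would also have to compute the optional checksum over an IP pseudo-header (left as zero here, which IPv4 treats as "no checksum") and prepend an IP header.

```python
import struct

def udp_datagram(src_port: int, dst_port: int, payload: bytes) -> bytes:
    """Build a UDP datagram by hand: 8-byte header followed by payload.

    Header fields (RFC 768), all big-endian 16-bit:
      source port | destination port | total length | checksum
    Checksum is left as 0, which over IPv4 means "not computed".
    """
    length = 8 + len(payload)  # header is always 8 bytes
    header = struct.pack("!HHHH", src_port, dst_port, length, 0)
    return header + payload

# Example: what a DNS-bound datagram's bytes would look like.
dgram = udp_datagram(5000, 53, b"hello")
```

With an ordinary `SOCK_DGRAM` socket, all of this (plus the IP layer, fragmentation, and checksumming) is handled for you by the kernel; that is the code you would be taking on by going raw.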
I did not fully read the paper you linked, but the question to ask yourself is: can you actually replicate the gains reported in that paper for your scenario?
In my opinion, you should first figure out why your client is slow. What are your requirements? Do you have any metrics for what constitutes a good, fast client? If I were you, I would first measure the current implementation against metrics that matter for the scenario, e.g. bytes/sec transferred. Then I would profile the client to see where it is spending its time, and try to reduce that overhead to make it faster.
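As a starting point for the "measure first" step, here is a small sketch of a bytes/sec benchmark. The `send_fn` parameter is a hypothetical stand-in for whatever your client uses to transmit (for example `sock.send`); it is assumed to return the number of bytes sent, as Python's socket `send` does.

```python
import time

def measure_throughput(send_fn, payload: bytes, seconds: float = 5.0) -> float:
    """Repeatedly call send_fn(payload) for `seconds` and return bytes/sec."""
    sent = 0
    deadline = time.perf_counter() + seconds
    while time.perf_counter() < deadline:
        sent += send_fn(payload)
    return sent / seconds
```

Run this against your current UDP client to get a baseline number, then point Python's built-in `cProfile` at the same workload to see which functions dominate; only after that does it make sense to consider something as drastic as raw sockets.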
To summarize, look for savings at the top of the stack (i.e. in your application) before going down the stack. If your app is not written well, then no matter how low you go, you will not see the performance gains you expect.