I guess the variable latencies involved (such as IO) here are the key issue. How fine a measure are you looking for? I am also not sure this is (strictly speaking) related to Java.
In any event, I think you need a comparative operation.
Two distinct machines write to a shared device (your SMB share), and each creates a file containing its epoch. Ideally, to minimize latency issues, you would obtain this epoch just before you write, and then close the file immediately.
The client then compares its own epoch and file timestamp to the server's file timestamp and its epoch content. These four measures should provide enough information to estimate the relative difference between the two JVMs' epoch times.
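A minimal sketch of the probe step on either side, assuming a file path on the mounted share (the path here is a placeholder, not a real mount point):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ClockProbe {
    // Writes this JVM's epoch to a file on the share, then reads back the
    // file's last-modified time (stamped by the SMB device's clock).
    // Returns {localEpoch, shareTimestamp}.
    static long[] probe(Path file) throws IOException {
        long epoch = System.currentTimeMillis(); // capture just before the write
        Files.write(file, Long.toString(epoch).getBytes());
        long ts = Files.getLastModifiedTime(file).toMillis();
        return new long[] { epoch, ts };
    }

    public static void main(String[] args) throws IOException {
        // "probe.ts" is a stand-in; in practice this would live on the SMB share
        Path file = Paths.get(args.length > 0 ? args[0] : "probe.ts");
        long[] tuple = probe(file);
        System.out.println("epoch=" + tuple[0] + " ts=" + tuple[1]);
    }
}
```

Each side runs the same probe against the same share; the two resulting tuples are what get compared.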
[Edit to clarify]
For example, the server's tuple ({epoch, ts}) is {S_t, SMB_ts}, and the client's is {C_t, SMB_C_ts}. Let's say you get (made-up numbers here) {5000, 4800} and {5100, 5000}. Take the diff between the server's timestamp and the client's timestamp (here 4800 - 5000 => -200) and add it to the client's epoch (here 5100 + (-200) => 4900). So the client is 100 units behind the server.
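The same arithmetic as a sketch, using the made-up numbers from the example:

```java
public class SkewExample {
    // Estimates client clock minus server clock from each side's
    // {epoch, shareTimestamp} tuple. Both timestamps come from the
    // SMB device's clock, so their diff cancels that clock out.
    static long skew(long serverEpoch, long serverTs,
                     long clientEpoch, long clientTs) {
        long tsDiff = serverTs - clientTs;          // 4800 - 5000 => -200
        return clientEpoch + tsDiff - serverEpoch;  // 5100 - 200 - 5000 => -100
    }

    public static void main(String[] args) {
        // Negative means the client is behind the server.
        System.out.println(skew(5000, 4800, 5100, 5000)); // prints -100
    }
}
```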
[Final edit]: (Note that you have three clocks to deal with here: the SMB device's, the server's, and the client's.)