You're basically out of luck, unless you somehow change your requirements.
First, specifically on Unix systems, there's nothing to stop multiple processes from writing to the same files. On a SINGLE SYSTEM, this won't be a problem whatsoever; you'll just have a typical race condition, should two or more writes conflict over the same space in the file, as to which one actually gets written. Since it's on a single system, this has perfect resolution, at the byte level.
So, the game in terms of having multiple processes writing to the same file is: how do those processes coordinate? How do they ensure that they don't walk on each other? On Unix, again, there is an OS-based locking mechanism that can be used to prevent that, but typically systems implement a central server and coordinate all of their writes through it; that server then writes to the disk while mitigating and handling any conflicts.
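Just to make the locking approach concrete, here's a minimal sketch in Python using an advisory flock() lock; the log path and record format are made up for the illustration, and note that this only coordinates processes on one machine that all agree to take the lock:

```python
import fcntl
import os

LOG_PATH = "/var/log/app/shared.log"  # hypothetical shared log file

def append_record(record: str) -> None:
    """Append one line to the shared log, serialized by an advisory lock.

    Every writer has to use the same protocol; flock() does nothing
    against processes that simply ignore it.
    """
    with open(LOG_PATH, "a") as f:
        fcntl.flock(f.fileno(), fcntl.LOCK_EX)   # block until we own the lock
        try:
            f.write(record.rstrip("\n") + "\n")
            f.flush()
            os.fsync(f.fileno())                 # force the bytes to disk
        finally:
            fcntl.flock(f.fileno(), fcntl.LOCK_UN)
```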
Your problem is twofold.
One, you're suggesting that the independent log processes will not cooperate, that they will not share information and coordinate their writes to the volume. That throws a wrench (a big wrench) into the works right there.
Second, you propose not only having multiple processes write to the same volume, but having that volume shared over a SAN. That's another wrench.
Unlike NFS, SANs don't support "file systems". Rather, they support "storage": basically, block-level devices. SANs, once you get past a bunch of volume management shenanigans, are actually pretty "stupid" from the OS's point of view.
I'm pretty sure you can actually have a volume mounted on multiple machines, but I'm not sure more than one can actually WRITE to the device. There are good reasons for this.
Simply, SANs are block-level storage, a block being, say, 4K bytes. That's the "atomic" unit of work for the SAN. Want to change a single byte of data? Read a 4K block from the SAN, change your byte, and write the 4K block back.
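As a rough sketch of what that read-modify-write cycle amounts to (the device path and block size here are purely illustrative):

```python
import os

BLOCK_SIZE = 4096                  # illustrative block size
DEVICE = "/dev/sdX"                # hypothetical raw SAN device

def change_one_byte(offset: int, value: int) -> None:
    """Read-modify-write the whole block containing `offset`.

    There is no byte-level write to a block device: the entire block
    comes back down, and anything another machine wrote to that block
    in the meantime gets silently overwritten.
    """
    block_start = (offset // BLOCK_SIZE) * BLOCK_SIZE
    fd = os.open(DEVICE, os.O_RDWR)
    try:
        block = bytearray(os.pread(fd, BLOCK_SIZE, block_start))
        block[offset - block_start] = value
        os.pwrite(fd, bytes(block), block_start)
    finally:
        os.close(fd)
```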
If you have several machines thinking that they have "universal" access to the SAN storage, and are treating it as a file system, you have a corrupted, ruined file system. It's that simple. Each machine will write what it thinks the blocks should look like while the other machines are smashing them with their own local versions. Disaster. Ruin. Not happy.
Even getting one machine to write to a SAN while another reads from it is tricky. It's also slow, as the reader can make few assumptions about the contents of the disk, so it needs to read, and re-read, blocks (it can't cache anything, like file system TOCs, etc., as, well, they're changing behind its back due to the activity of the writer -- so, read it again... and again...).
Things like NFS "solve" this problem because you no longer work with raw storage. Rather you work with an actual filesystem.
Finally, there's nothing wrong with having independent log files being streamed out from your servers. They can still be queried. You simply have to repeat the queries and consolidate the results.
If you have 5 machines streaming, and you want "all activity between 12:00pm and 12:05pm", then make 5 queries, one to each log store, and consolidate the results. As for how to efficiently query your log data, that's an indexing problem, and not insurmountable depending on how you query. If you query by time, then create files by time (every minute, every hour, whatever), and scan them. If your system is "read rarely", this isn't a big deal. If you need more sophisticated indexing, then you'll need to come up with something else.
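Something along these lines is all the "consolidation" has to be, assuming (purely hypothetically) that each host writes per-minute files of timestamp-prefixed lines:

```python
from datetime import datetime, timedelta

# Hypothetical layout: /logs/<host>/YYYYMMDD-HHMM.log,
# each line being "ISO-timestamp message".
HOSTS = ["web1", "web2", "web3", "web4", "web5"]

def minute_files(host: str, start: datetime, end: datetime):
    """Yield the per-minute file paths for one host covering [start, end)."""
    t = start.replace(second=0, microsecond=0)
    while t < end:
        yield f"/logs/{host}/{t:%Y%m%d-%H%M}.log"
        t += timedelta(minutes=1)

def query_window(start: datetime, end: datetime):
    """Run the same time-window query against every host's log store,
    then consolidate: collect matching lines and sort by timestamp."""
    hits = []
    for host in HOSTS:
        for path in minute_files(host, start, end):
            try:
                with open(path) as f:
                    for line in f:
                        ts = datetime.fromisoformat(line.split(" ", 1)[0])
                        if start <= ts < end:
                            hits.append((ts, host, line.rstrip("\n")))
            except FileNotFoundError:
                continue              # that minute had no activity on this host
    return sorted(hits)               # merged, time-ordered view across all 5 stores

# e.g. query_window(datetime(2024, 1, 1, 12, 0), datetime(2024, 1, 1, 12, 5))
```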
You could use a database to write the files, and indexes, but I doubt you'll find many that enjoy reading from files that they don't control, or that change underneath them.
CouchDB, or something similar, might work because of its crash-resistant, always-consistent database format. Its datafile is always readable by a database instance. That could be an option for you.
But I would still do multiple queries and merge them.