UNIX file-locking is dead easy: the operating system assumes that you know what you are doing and lets you do what you want:

For example, if you try to delete a file which another process has opened, the operating system will usually let you do it. The original process keeps its file handles until it terminates, at which point the file system quietly recycles the disk resources. No fuss; that's the way I like it.

How different things are on Windows: if I try to delete a file which another process is using, I get an operating-system error. The file is untouchable until the original process releases its lock. That was fine back in the single-user days of MS-DOS, when any locking process was likely to be on the same computer that held the files, but on a network it's a nightmare:

Consider what happens when a process hangs while writing to a shared file on a Windows file-server. Before the file can be deleted, we have to locate the computer and identify the process on that computer which originally opened the file. Only then can we kill the process and delete our unwanted file.

What a nuisance!

Is there a way to make this better? What I want is for file-locking on Windows to behave like file-locking in UNIX. I want the operating system to just let me do what I want, because I'm in charge and I know what I'm doing...

...so can it be done?

+3  A: 

According to MSDN, you can pass the sharing-mode flag FILE_SHARE_DELETE in the third parameter (dwShareMode) of CreateFile(), which:

Enables subsequent open operations on a file or device to request delete access.

Otherwise, other processes cannot open the file or device if they request delete access.

If this flag is not specified, but the file or device has been opened for delete access, the function fails.

Note Delete access allows both delete and rename operations.

http://msdn.microsoft.com/en-us/library/aa363858(VS.85).aspx

So if you can control your applications, you can use this flag.
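
For example, here is a minimal sketch of opening a file so that other processes can still delete it (the path and access flags are assumptions, purely for illustration):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical path, for illustration only. */
        HANDLE h = CreateFileA(
            "C:\\temp\\shared.log",        /* lpFileName */
            GENERIC_READ | GENERIC_WRITE,  /* dwDesiredAccess */
            FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
            NULL,                          /* lpSecurityAttributes */
            OPEN_ALWAYS,                   /* dwCreationDisposition */
            FILE_ATTRIBUTE_NORMAL,         /* dwFlagsAndAttributes */
            NULL);                         /* hTemplateFile */
        if (h == INVALID_HANDLE_VALUE) {
            fprintf(stderr, "CreateFile failed: %lu\n", GetLastError());
            return 1;
        }

        /* Because we granted FILE_SHARE_DELETE, another process can now
           call DeleteFile() on the path even while we hold this handle. */
        CloseHandle(h);
        return 0;
    }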

bialix
+2  A: 

No. Windows is designed for the "average user", that is, people who don't understand anything about a computer. Therefore, the OS tries to be smart to avoid PEBKACs. To quote Bill Gates: "There are no significant bugs in our released software that any significant number of users want fixed." Of course, he knows that 99.9999% of all Windows users can't tell whether the program just did something odd because of them or because of the guy who wrote it.

Unix was designed when the world was simpler and anyone close enough to a computer to touch it probably knew how to assemble it from dirty sand. Therefore, the OS usually lets you do what you want, because it assumes that you know better (and if you didn't, you will next time).

Technical answer: Unix allocates an i-node when you create a file; the path (the directory entry) and the i-node are separate things. If a process opens a file and another process then deletes it and creates a new file at the same path, you end up with two i-nodes: the old one, kept alive by the open file handle, and the new one. This is by design. It allows for a fancy security feature: you can create files which no one can open but yourself:

  1. Open a file
  2. Delete it (but keep the file handle)
  3. Use the file any way you like
  4. Close the file

After step #2, the only process in the universe that can access the file is the one holding the open handle (unless you want to read the hard disk block by block). The OS will keep the data alive until you either close the file or your process dies (at which point Unix will clean up after you).
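
A minimal sketch of that pattern in C (the path below is a hypothetical placeholder; O_EXCL is added to avoid racing with another creator):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Step 1: create and open a scratch file (hypothetical path). */
        int fd = open("/tmp/private-scratch", O_RDWR | O_CREAT | O_EXCL, 0600);
        if (fd == -1) { perror("open"); return 1; }

        /* Step 2: remove the directory entry.  The i-node and its data
           blocks stay alive as long as our descriptor is open. */
        if (unlink("/tmp/private-scratch") == -1) { perror("unlink"); return 1; }

        /* Step 3: use the file; no other process can open it by name now. */
        if (write(fd, "secret\n", 7) != 7) { perror("write"); return 1; }

        /* Step 4: close; the kernel reclaims the disk space here. */
        close(fd);
        return 0;
    }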

This design is the foundation of all Unix filesystems. The Windows filesystem works completely differently, so there is no way to configure it at a high level to achieve the same effect. If you have access to the source, you can open the file in a sharing mode. That allows other processes to access it at the same time, but then you have to check before every read/write whether the file still exists.

Aaron Digulla
I don't think that's how it works. You can't have two inodes for the same file. Different inodes, different files. Also, you can read whatever open file you want with procfs.
Rob Kennedy
same file => same path. Fixed. Also, what do you mean by your comment about procfs?
Aaron Digulla
Surely what the "average user" is doing is nothing to do with low-level file semantics. To me, this is only an issue to programmers.
MarkR
@Rob Kennedy: Oh, yes you can! They're called symbolic links. ;)
Spoike
@MarkR: It's safer for the average user when they can't open the same file in two applications (or two instances of the same application).
Aaron Digulla
@Spoike: Not sure; soft/sym links are like normal files; they just contain the path as the data part, so every symlink is a new i-node. Hard links, on the other hand, share the destination's i-node (which keeps a link counter).
Aaron Digulla
+1  A: 

That doesn't really help if the hung process still has the handle open; the resources won't be released until that hung process releases the handle. But anyway, on Windows it is possible to force-close a file out from under a process that's using it. Process Explorer from sysinternals.com will let you look at and close the handles that a process has open.
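
If you need to do the same thing programmatically rather than interactively, a rough sketch of the underlying trick is DuplicateHandle() with DUPLICATE_CLOSE_SOURCE. The big assumption here is that you already know the handle value inside the target process; discovering it requires enumerating system handles (e.g. via the undocumented NtQuerySystemInformation) and is not shown:

    #include <windows.h>

    /* Close a handle inside another process.  pid is the target process
       ID; hRemote is the handle value *in that process* (assumed known). */
    BOOL ForceCloseRemoteHandle(DWORD pid, HANDLE hRemote)
    {
        HANDLE hProc = OpenProcess(PROCESS_DUP_HANDLE, FALSE, pid);
        if (hProc == NULL)
            return FALSE;

        /* DUPLICATE_CLOSE_SOURCE closes the handle in the target process
           as a side effect of duplicating it into our process. */
        HANDLE hLocal = NULL;
        BOOL ok = DuplicateHandle(hProc, hRemote,
                                  GetCurrentProcess(), &hLocal,
                                  0, FALSE,
                                  DUPLICATE_SAME_ACCESS | DUPLICATE_CLOSE_SOURCE);
        if (ok && hLocal != NULL)
            CloseHandle(hLocal);  /* drop our local duplicate too */

        CloseHandle(hProc);
        return ok;
    }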

Rob K
procexp is great for the sysadmin, but not for the developer: I want my stuff to run unattended!
Salim Fadhley
+2  A: 

Note that Process Explorer allows force-closing of file handles (for processes local to the box on which you are running it) via Handle -> Close Handle.

Unlocker purports to do a lot more, and provides a helpful list of other tools.

Also, deleting on reboot is an option (though that sounds like it's not what you want).
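
The delete-on-reboot option is just MoveFileEx() with a NULL target name and MOVEFILE_DELAY_UNTIL_REBOOT; a minimal sketch follows (the path is a placeholder, and the call typically needs administrative rights):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* A NULL new name plus MOVEFILE_DELAY_UNTIL_REBOOT means
           "delete this file at the next boot". */
        if (!MoveFileExA("C:\\temp\\stuck.dat", NULL,
                         MOVEFILE_DELAY_UNTIL_REBOOT)) {
            fprintf(stderr, "MoveFileEx failed: %lu\n", GetLastError());
            return 1;
        }
        puts("File will be deleted on the next reboot.");
        return 0;
    }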

ShuggyCoUk