views: 6618

answers: 15
I have a string that I want to use as a filename, so I want to remove all characters that wouldn't be allowed in filenames, using Python.

I'd rather be strict than otherwise, so let's say I want to retain only letters, digits, and a small set of other characters like "_-.() ". What's the most elegant solution?

The filename needs to be valid on multiple operating systems (Windows, Linux and Mac OS) - it's an MP3 file in my library with the song title as the filename, and is shared and backed up between 3 machines.

A: 

Maybe the most elegant way would be to use a regex?

Geo
Such as the one posted right above this comment.
gx
+18  A: 

This whitelist approach (i.e., allowing only the characters present in valid_chars) works as long as there are no limits on the formatting of the files and no combinations of valid characters that are illegal (like ".."). For example, the rule as stated would allow a filename like " . txt", which I believe is not valid on Windows. Since this is the simplest approach, I'd remove whitespace from valid_chars and prepend a known-good string in case of error; any other approach would have to know exactly what is allowed where in order to cope with Windows file-naming limitations, and would therefore be a lot more complex.

>>> import string
>>> valid_chars = "-_.() %s%s" % (string.ascii_letters, string.digits)
>>> valid_chars
'-_.() abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789'
>>> filename = "This Is a (valid) - filename%$&$ .txt"
>>> ''.join(c for c in filename if c in valid_chars)
'This Is a (valid) - filename .txt'
Vinko Vrsalovic
`string.letters` depends on the current locale, so it might contain symbols that are unsafe in filenames.
J.F. Sebastian
`valid_chars = frozenset(valid_chars)` wouldn't hurt. It is 1.5 times faster if applied to allchars.
J.F. Sebastian
+1  A: 

You could use the re.sub() method to replace anything that isn't "file-like". But in effect, almost any character could be valid, so there are no prebuilt functions (as far as I know) to get it done.

import os
import re

name = "File!name?.txt"  # renamed from str to avoid shadowing the built-in
# Note: the '-' goes last in the character class so it isn't read as a range.
f = open(os.path.join("/tmp", re.sub(r'[^a-zA-Z0-9_.() -]+', '', name)))

Would result in a filehandle to /tmp/Filename.txt.

gx
'(?i)' is unnecessary here.
J.F. Sebastian
True, sorry, habits.
gx
+11  A: 

What is the reason to use the strings as file names? If human readability is not a factor, I would go with the base64 module, which can produce filesystem-safe strings. It won't be readable, but you won't have to deal with collisions, and it is reversible.

import base64
file_name_string = base64.urlsafe_b64encode(your_string)

Update: Changed based on Matthew's comment.
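
A minimal sketch of the round trip (my illustration, assuming Python 2 byte strings; the example title is made up):

import base64

your_string = "some song title / with ? odd * chars"
file_name_string = base64.urlsafe_b64encode(your_string)   # only [-_A-Za-z0-9=] in the output
original = base64.urlsafe_b64decode(file_name_string)      # and it decodes back exactly
assert original == your_string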

Igal Serban
Obviously this is the best answer if that is the case.
Warning! base64 encoding by default includes the "/" character as valid output, which isn't valid in filenames on a lot of systems. Instead use base64.urlsafe_b64encode(your_string)
Matthew
+3  A: 

You can use a list comprehension together with the string methods.

>>> s = 'foo-bar#baz?qux@127/\\9]'
>>> "".join([x for x in s if x.isalpha() or x.isdigit()])
'foobarbazqux1279'
This works great, and is the simplest one yet.
T.K.
+12  A: 

Just to complicate things further, you are not guaranteed to get a valid filename just by removing invalid characters. Since the allowed characters differ across filesystems, a conservative approach could still turn a valid name into an invalid one. You may want to add special handling for the cases where:

  • The string is all invalid characters (leaving you with an empty string)

  • You end up with a string with a special meaning, e.g. "." or ".."

  • On Windows, certain device names are reserved. For instance, you can't create a file named "nul" or "nul.txt" (or nul.anything, in fact). The reserved names are:

    CON, PRN, AUX, NUL, COM1, COM2, COM3, COM4, COM5, COM6, COM7, COM8, COM9, LPT1, LPT2, LPT3, LPT4, LPT5, LPT6, LPT7, LPT8, and LPT9

You can probably work around these issues by prepending to the filename some string that can never result in one of these cases, in addition to stripping invalid characters.
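
For illustration, a rough sketch of that idea (the whitelist, the 'file_' prefix, and the reserved-name set below are my assumptions, not a definitive list):

import string

VALID_CHARS = "-_.() %s%s" % (string.ascii_letters, string.digits)
RESERVED = set(['con', 'prn', 'aux', 'nul'] +
               ['com%d' % i for i in range(1, 10)] +
               ['lpt%d' % i for i in range(1, 10)])

def safe_filename(name):
    # Whitelist pass first, then guard against the troublesome results above.
    cleaned = ''.join(c for c in name if c in VALID_CHARS).strip()
    base = cleaned.split('.')[0].lower()        # "nul.txt" counts as reserved too
    if not cleaned or cleaned in ('.', '..') or base in RESERVED:
        cleaned = 'file_' + cleaned             # prepend a known-safe prefix
    return cleaned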

Brian
+3  A: 
>>> import string
>>> safechars = '_-.()' + string.digits + string.ascii_letters
>>> allchars = string.maketrans('', '')
>>> deletions = ''.join(set(allchars) - set(safechars))
>>> filename = '#abc.$%.txt'
>>> safe_filename = string.translate(filename, allchars, deletions)
>>> safe_filename
'abc..txt'
>>>

The above code doesn't work for unicode strings, and it doesn't handle empty strings or special filenames ('nul', 'con', etc.) either.
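
A rough unicode-capable variant, for comparison (my sketch, assuming Python 2.7, where unicode.translate takes a mapping of code points rather than a 256-character table); it still doesn't handle the empty/special-name cases:

import string

safechars = frozenset('_-.()' + string.digits + string.ascii_letters)

def clean_unicode_filename(filename):
    # unicode.translate takes a mapping of ordinals; None deletes the character.
    deletions = dict((ord(c), None) for c in filename if c not in safechars)
    return filename.translate(deletions)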

J.F. Sebastian
+1 for translation tables, it is by far the most efficient method. For the special filenames/empties, a simple pre-condition check will suffice, and extraneous periods are a simple correction as well.
Christian Witts
While translate is slightly more efficient than a regexp, that time will most likely be dwarfed by the cost of actually opening the file, which no doubt you intend to do. Thus I prefer a more readable regexp solution to the mess above.
nosatalian
+1  A: 

Keep in mind that there are actually no restrictions on filenames on Unix systems, other than:

  • It may not contain \0
  • It may not contain /

Everything else is fair game.

$ touch "
> even multiline
> haha
> ^[[31m red ^[[0m
> evil"
$ ls -la 
-rw-r--r--       0 Nov 17 23:39 ?even multiline?haha??[31m red ?[0m?evil
$ ls -lab
-rw-r--r--       0 Nov 17 23:39 \neven\ multiline\nhaha\n\033[31m\ red\ \033[0m\nevil
$ perl -e 'for my $i ( glob(q{./*even*}) ){ print $i; } '
./
even multiline
haha
 red 
evil

Yes, I just stored ANSI colour codes in a file name and had them take effect.

For entertainment, put a BEL character in a directory name and watch the fun that ensues when you CD into it ;)
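
If you only ever target Unix, a minimal sketch of that rule (my illustration; the replacement character and the handling of '.', '..' and the empty name are assumptions):

def unix_safe(name, replacement='_'):
    # Only NUL and '/' are forbidden in a Unix path component; '.' and '..'
    # (and the empty string) are special rather than forbidden.
    name = name.replace('\0', replacement).replace('/', replacement)
    return name if name not in ('', '.', '..') else replacement + name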

Kent Fredric
+1  A: 

Why not just wrap the os.open (or open) call in a try/except and let the underlying OS sort out whether the file is valid?

Much less work and valid no matter which OS you use.
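
A minimal sketch of that approach (my illustration; the fallback name is an assumption):

import os

def open_or_fallback(directory, name, fallback='untitled.mp3'):
    # Let the OS decide whether the name is acceptable; fall back if it isn't.
    try:
        return open(os.path.join(directory, name), 'w')
    except (IOError, OSError):
        return open(os.path.join(directory, fallback), 'w')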

James Anderson
+9  A: 

You can look at the Django framework for how they create a "slug" from arbitrary text. A slug is URL- and filename-friendly.

Their template/defaultfilters.py (at around line 183) defines a function, slugify, that's probably the gold standard for this kind of thing. Essentially, their code is the following.

def slugify(value):
    """
    Normalizes string, converts to lowercase, removes non-alpha characters,
    and converts spaces to hyphens.
    """
    import re
    import unicodedata
    value = unicodedata.normalize('NFKD', value).encode('ascii', 'ignore')
    value = unicode(re.sub('[^\w\s-]', '', value).strip().lower())
    value = unicode(re.sub('[-\s]+', '-', value))
    return value

There's more, but I've left it out since it doesn't address slugification, only escaping.
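
For illustration (my example, assuming Python 2 and a unicode argument), the function above gives:

>>> slugify(u'Héllo Wörld!')
u'hello-world'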

S.Lott
The last line should be: value = unicode(re.sub('[-\s]+', '-', value))
Joseph Turian
+1  A: 

Another issue that the other answers haven't addressed yet is the empty string, which is obviously not a valid filename. You can also end up with an empty string by stripping too many characters.

What with the Windows reserved filenames and issues with dots, the safest answer to the question “how do I normalise a valid filename from arbitrary user input?” is “don't even bother trying”: if you can find any other way to avoid it (e.g. using integer primary keys from a database as filenames), do that.

If you must, and you really need to allow spaces and ‘.’ for file extensions as part of the name, try something like:

import re
badchars= re.compile(r'[^A-Za-z0-9_. ]+|^\.|\.$|^ | $|^$')
badnames= re.compile(r'(aux|com[1-9]|con|lpt[1-9]|prn)(\.|$)')

def makeName(s):
    name= badchars.sub('_', s)
    if badnames.match(name):
        name= '_'+name
    return name
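
For illustration, a few example calls (my examples, using the definitions above):

>>> makeName('con.txt')
'_con.txt'
>>> makeName('what?is*this.mp3')
'what_is_this.mp3'
>>> makeName('')
'_'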

Even this can't be guaranteed to be right, especially on unexpected OSes: RISC OS, for example, hates spaces and uses ‘.’ as the directory separator.

bobince
A: 

Shouldn't this be built into the os.path module?

endolith
+1  A: 

You have to be careful, though. Your intro doesn't clearly say whether you are only dealing with Latin-script languages. Some words can become meaningless, or take on another meaning, if you sanitize them down to ASCII characters only.

Imagine you have "forêt poésie" (forest poetry); your sanitization might give "fort-posie" (strong + something meaningless).

It's worse if you have to deal with Chinese characters.

With "下北沢", your system might end up producing "---", which is doomed to fail after a while and isn't very helpful. So if you are only dealing with files, I would encourage you either to name them with a generic string that you control or to keep the characters as they are. For URIs, it's about the same.

karlcow
+2  A: 

This is the solution I ultimately used:

import string
import unicodedata

validFilenameChars = "-_.() %s%s" % (string.ascii_letters, string.digits)

def removeDisallowedFilenameChars(filename):
    cleanedFilename = unicodedata.normalize('NFKD', filename).encode('ASCII', 'ignore')
    return ''.join(c for c in cleanedFilename if c in validFilenameChars)

The unicodedata.normalize call replaces accented characters with their unaccented equivalents, which is better than simply stripping them out. After that, all disallowed characters are removed.

My solution doesn't prepend a known string to avoid possible disallowed filenames, because I know they can't occur given my particular filename format. A more general solution would need to do so.
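
For illustration (my example, assuming Python 2 and a unicode argument):

>>> removeDisallowedFilenameChars(u'forêt poésie.mp3')
'foret poesie.mp3'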

Sophie Tatham
you should be able to use uuid.uuid4() for your unique prefix
slf
too verbose ahh
Claudiu
A: 

The bobcat project contains a Python module that does just this.

It's not completely robust; see this post and this reply.

So, as noted: base64 encoding is probably a better idea if readability doesn't matter.

wires