views:

763

answers:

7

This is a duplicate of http://stackoverflow.com/questions/277521/how-to-identify-the-file-content-is-in-ascii-or-binary, but since I don't have the rep, I can't edit that question to improve it.

Informally, most of us understand that there are 'binary' files (object files, images, movies, executables, proprietary document formats, etc) and 'text' files (source code, XML files, HTML files, email, etc).

In general, you need to know the contents of a file to be able to do anything useful with it, and from that point of view it doesn't really matter whether the encoding is 'binary' or 'text'. And of course files just store bytes of data, so they are all 'binary', and 'text' doesn't mean anything without knowing the encoding. And yet it is still useful to talk about 'binary' and 'text' files; to avoid offending anyone with this imprecise definition, I will continue to use 'scare' quotes.

However, there are various tools that work on a wide range of files, and in practical terms you want to do something different based on whether the file is 'text' or 'binary'. An example of this is any tool that outputs data on the console. Plain 'text' will look fine and is useful; 'binary' data messes up your terminal and is generally not useful to look at. GNU grep, at least, uses this distinction when determining whether it should output matches to the console.

So, the question is: how do you tell if a file is 'text' or 'binary'? And to restrict it further, how do you tell on a Linux-like filesystem? I am not aware of any filesystem metadata that indicates the 'type' of a file, so the question further becomes: by inspecting the content of a file, how do I tell if it is 'text' or 'binary'? For simplicity, let's restrict 'text' to mean characters which are printable on the user's console. And in particular, how would you implement this? (I thought this was implied on this site, but it is helpful, in general, to be pointed at existing code that does this; I should have specified that I'm not really after which existing programs I can use to do the job.)

+3  A: 

Well, if you are just inspecting the entire file, see if every character is printable with isprint(c). It gets a little more complicated for Unicode.
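A minimal sketch of that check (the helper name is my own): treat the buffer as text only if every byte passes isprint() or isspace(), since isprint() alone rejects newlines and tabs, which ordinary text files contain.

```c
#include <ctype.h>
#include <stddef.h>

/* Hypothetical helper: returns 1 if every byte in buf is printable
 * or ordinary whitespace, 0 otherwise. isspace() is needed because
 * isprint() rejects '\n' and '\t'. */
int all_printable(const unsigned char *buf, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        if (!isprint(buf[i]) && !isspace(buf[i]))
            return 0;
    }
    return 1;
}
```

Note that this only makes sense for single-byte encodings; as the answer says, Unicode needs more work.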

To distinguish a Unicode text file, MSDN offers some great advice as to what to do.

The gist of it is to first inspect up to the first four bytes:

EF BB BF     UTF-8 
FF FE        UTF-16, little endian 
FE FF        UTF-16, big endian 
FF FE 00 00  UTF-32, little endian 
00 00 FE FF  UTF-32, big-endian

That will tell you the encoding. Then, you'd want to use iswprint(c) for the rest of the characters in the text file. For UTF-8 and UTF-16, you need to parse the data manually since a single character can be represented by a variable number of bytes. Also, if you're really anal, you'll want to use the locale variant of iswprint if that's available on your platform.
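The BOM table above can be turned into code roughly like this (a sketch; the enum and function names are my own). Note the order of the tests matters: the UTF-32 LE BOM FF FE 00 00 begins with the UTF-16 LE BOM FF FE, so the longer pattern must be checked first.

```c
#include <stddef.h>
#include <string.h>

typedef enum { ENC_UNKNOWN, ENC_UTF8, ENC_UTF16LE, ENC_UTF16BE,
               ENC_UTF32LE, ENC_UTF32BE } encoding;

/* Map the first bytes of a file to an encoding based on its BOM.
 * Longer BOMs are tested before their shorter prefixes. */
encoding detect_bom(const unsigned char *buf, size_t n)
{
    if (n >= 4 && memcmp(buf, "\xFF\xFE\x00\x00", 4) == 0) return ENC_UTF32LE;
    if (n >= 4 && memcmp(buf, "\x00\x00\xFE\xFF", 4) == 0) return ENC_UTF32BE;
    if (n >= 3 && memcmp(buf, "\xEF\xBB\xBF", 3) == 0)     return ENC_UTF8;
    if (n >= 2 && memcmp(buf, "\xFF\xFE", 2) == 0)         return ENC_UTF16LE;
    if (n >= 2 && memcmp(buf, "\xFE\xFF", 2) == 0)         return ENC_UTF16BE;
    return ENC_UNKNOWN;                 /* no BOM: fall back to other tests */
}
```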

MSN
This only works for files that follow this rule.
Georg
Well if it doesn't follow those rules then it really isn't a text file. Except for mbcs, but that's an entirely different story.
MSN
+2  A: 

Most programs that try to tell the difference use a heuristic, such as examining the first n bytes of the file and seeing whether those bytes all qualify as 'text' or not (i.e., do they all fall within the range of printable ASCII characters). For finer distinction there's always the 'file' command on UNIX-like systems.

dwc
+10  A: 

You can use the "file" command. It does a bunch of tests on the file (man file) to decide if it's binary or text. You can look at/borrow its source code if you need to do that from C.

file README
README: ASCII English text, with very long lines

file /bin/bash
/bin/bash: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), for GNU/Linux 2.2.5, dynamically linked (uses shared libs), stripped
definitely the answer I was going to give...
bjeanes
+1 If it's a Linux system, file is going to have much better heuristics than anything you'll build yourself.
Adam Lassek
Yeah, if file is available, it is going to be the best tool for the job. No question! Also the 'file -I' is a neat trick. I hadn't thought of shelling out for my particular problem, however I don't think I could cop the performance overhead. Thanks!
benno
A: 

You can determine the MIME type of the file (file -i on Linux, file -I (capital i) on Mac OS X). If it starts with text/, it's text; otherwise it's binary. The only exceptions are XML applications. You can match those by looking for +xml at the end of the file type.
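The classification rule itself is easy to express in C (a hypothetical helper, not taken from file; it assumes the bare MIME type string, without parameters such as "; charset=..."):

```c
#include <string.h>

/* A MIME type counts as text if it starts with "text/" or, for XML
 * applications, if the subtype ends in "+xml"
 * (e.g. "application/xhtml+xml"). */
int mime_is_text(const char *mime)
{
    if (strncmp(mime, "text/", 5) == 0)
        return 1;
    size_t len = strlen(mime);
    if (len >= 4 && strcmp(mime + len - 4, "+xml") == 0)
        return 1;
    return 0;
}
```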

phihag
I think that should be "file -I" (upper case). At least according to my tests and man page.
benno
Just looked it up; lower case is correct in Debian and Gentoo Linux. Their file is ftp://ftp.astron.com/pub/file/file-5.00.tar.gz (or a different version). -I (upper) is not an option in either one.
phihag
Huh, weird. The version on OS X (4.17) uses -I (upper) and the one on my Linux boxes (4.24) uses -i (lower). How bizarre! I wonder if it is an OS X-ism, or whether the authors simply changed the interface between point releases.
benno
+2  A: 

One simple check is whether the file contains \0 bytes. Text files don't have them.
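That test is one call to memchr (the wrapper name is my own). One caveat worth stating: UTF-16 text files are full of zero bytes, so this only works for single-byte encodings.

```c
#include <string.h>

/* Returns 1 if the buffer contains a NUL byte, which almost always
 * means the file is binary. Not valid for UTF-16/UTF-32 text. */
int has_nul_byte(const char *buf, size_t n)
{
    return memchr(buf, '\0', n) != NULL;
}
```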

Georg
+3  A: 

Our software reads a number of binary file formats as well as text files.

We first look at the first few bytes for a magic number which we recognize. If we do not recognize the magic number of any of the binary types we read, then we look at up to the first 2K bytes of the file to see whether it appears to be a UTF-8, UTF-16 or a text file encoded in the current code page of the host operating system. If it passes none of these tests, we assume that it is not a file we can deal with and throw an appropriate exception.
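The poster doesn't show code, but the UTF-8 part of such a test can be sketched like this (my own screening check, not their implementation): walk the buffer and verify that every multi-byte sequence has a valid lead byte followed by the right number of 10xxxxxx continuation bytes. Overlong forms and surrogates are not rejected here, so this is a screening test, not a full validator.

```c
#include <stddef.h>

/* Returns 1 if buf looks like well-formed UTF-8, 0 otherwise. */
int is_valid_utf8(const unsigned char *buf, size_t n)
{
    size_t i = 0;
    while (i < n) {
        unsigned char c = buf[i];
        size_t extra;                        /* continuation bytes expected */
        if (c < 0x80)                extra = 0;   /* ASCII */
        else if ((c & 0xE0) == 0xC0) extra = 1;   /* 110xxxxx */
        else if ((c & 0xF0) == 0xE0) extra = 2;   /* 1110xxxx */
        else if ((c & 0xF8) == 0xF0) extra = 3;   /* 11110xxx */
        else return 0;          /* stray continuation or invalid lead byte */
        if (extra > 0 && i + extra >= n)
            return 0;           /* sequence truncated at end of buffer */
        for (size_t j = 1; j <= extra; j++)
            if ((buf[i + j] & 0xC0) != 0x80)
                return 0;       /* not a 10xxxxxx continuation byte */
        i += extra + 1;
    }
    return 1;
}
```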

Joe Erickson
+1  A: 

As previously stated, *nix operating systems have this ability within the file command. This command uses a configuration file that defines magic numbers contained within many popular file structures.

This file, called magic, was historically stored in /etc, although it may be in /usr/share on some distributions. The magic file defines offsets of values known to exist within particular file structures, so the file command can examine these locations to determine the type of a file.

The structure and description of the magic file can be found by consulting the relevant manual page (man magic).

As for an implementation, that can be found within file.c itself; however, the relevant portion of the file command that determines whether a file is readable text is the following:

/* Make sure we are dealing with ascii text before looking for tokens */
    for (i = 0; i < nbytes - 1; i++) {
        if (!isascii(buf[i]) ||
            (iscntrl(buf[i]) && !isspace(buf[i]) &&
             buf[i] != '\b' && buf[i] != '\032' && buf[i] != '\033'))
            return 0;       /* not all ASCII */
    }
Steve Weet