I am really used to doing grep -iIr on the Unix shell, but I haven't been able to find a PowerShell equivalent yet.

Basically, the above command searches the target folders recursively and ignores binary files because of the -I option. -I is equivalent to the --binary-files=without-match option, which tells grep to treat binary files as not matching the search string.

So far I have been using Get-ChildItem -r | Select-String as my PowerShell grep replacement, with the occasional Where-Object added. But I haven't figured out a way to ignore all binary files the way grep -I does.

How can binary files be filtered out or ignored with PowerShell?

So for a given path, I only want Select-String to search text files.
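
Roughly, the pipeline I'm after looks like this, where IsTextFile stands in for the predicate I don't know how to write yet:

# IsTextFile is hypothetical -- it's exactly the missing piece this question is about
Get-ChildItem -Recurse | Where-Object { !$_.PSIsContainer -and (IsTextFile $_) } |
    Select-String "foo"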

EDIT: A few more hours on Google turned up this question: How to identify whether the contents of a file are ASCII or binary. The question says "ASCII", but I believe the author really meant "text encoded", as I do.

EDIT: It seems that an isBinary() function needs to be written to solve this issue, probably as a C# command-line utility to make it more useful.

EDIT: It seems that what grep does is check for an ASCII NUL byte or a UTF-8 overlong sequence. If either is present, it considers the file binary. This comes down to a single memchr() call.
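
In PowerShell terms, the NUL-byte half of that check would look something like this ($someFile is a placeholder path, and reading only the first 1KB is my own choice):

# rough equivalent of grep's memchr() test on the first chunk of a file
$bytes = Get-Content -Path $someFile -Encoding Byte -TotalCount 1024
$looksBinary = ($bytes -contains 0)   # $true means grep would treat the file as binary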

+13  A: 

On Windows, file extensions are usually good enough:

# 'ss' below is assumed to be an alias for Select-String (e.g. Set-Alias ss Select-String)

# all C# and related files (projects, source control metadata, etc)
dir -r -fil *.cs* | ss foo

# exclude the binary types most likely to pollute your development workspace
dir -r -exclude *.exe, *.dll, *.pdb | ss foo

# stick the next three lines in your $profile (refining them over time)
$bins = New-Object System.Collections.Generic.List[string]
$bins.AddRange( [string[]]@(".exe", ".dll", ".pdb", ".png", ".mdf", ".docx") )
function IsBin([System.IO.FileInfo]$item) { $bins.Contains($item.Extension.ToLower()) }
dir -r | ? { !(IsBin $_) } | ss foo

But of course, file extensions are not perfect. Nobody likes typing long lists, and plenty of files are misnamed anyway.

I don't think Unix has any special binary-vs-text indicators in the filesystem. (Well, VMS did, but I doubt that's the source of your grep habits.) I looked at the implementation of grep -I, and apparently it's just a quick-n-dirty heuristic based on the first chunk of the file. That turns out to be a strategy I have a bit of experience with. So here's my advice on choosing a heuristic function that is appropriate for Windows text files:

  • Examine at least 1KB of the file. Lots of file formats begin with a header that looks like text but will bust your parser shortly afterward. The way modern hardware works, reading 50 bytes has roughly the same I/O overhead as reading 4KB.
  • If you only care about straight ASCII, exit as soon as you see something outside the character range [31-127 plus CR and LF]. You might accidentally exclude some clever ASCII art, but trying to separate those cases from binary junk is nontrivial.
  • If you want to handle Unicode text, let MS libraries handle the dirty work. It's harder than you think. From PowerShell you can easily access the IMultiLanguage2 interface (COM) or the Encoding.GetEncoding static method (.NET); see the BOM-sniffing sketch after this list. Of course, they are still just guessing. Raymond's comments on the Notepad detection algorithm (and the link within to Michael Kaplan) are worth reviewing before deciding exactly how you want to mix & match the platform-provided libraries.
  • If the outcome is important -- i.e. a flaw will do something worse than just clutter up your grep console -- then don't be afraid to hard-code some file extensions for the sake of accuracy. For example, *.PDF files occasionally have several KB of text at the front despite being a binary format, leading to some notorious bugs. Similarly, if you have a file extension that is likely to contain XML or XML-like data, you might try a detection scheme similar to Visual Studio's HTML editor. (SourceSafe 2005 actually borrows this algorithm for some cases.)
  • Whatever else happens, have a reasonable backup plan.
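
To illustrate the "let the libraries do the dirty work" bullet, here is a minimal sketch that asks .NET's StreamReader to sniff a byte-order mark. The function name is made up, and the fallback encoding is my own choice; files without a BOM simply come back as that fallback:

# StreamReader detects a BOM when the third constructor argument is $true; otherwise
# CurrentEncoding stays at the fallback given here (the ANSI code page under Windows PowerShell)
function Get-SniffedEncoding([string]$path)
{
    $reader = New-Object System.IO.StreamReader($path, [System.Text.Encoding]::Default, $true)
    try
    {
        [void]$reader.Read()       # CurrentEncoding is only updated after the first read
        $reader.CurrentEncoding    # e.g. UTF8Encoding, UnicodeEncoding (UTF-16 LE)
    }
    finally
    {
        $reader.Dispose()
    }
}

A file whose sniffed encoding is still the fallback can then be handed to a byte-level check like the one below.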

As an example, here's the quick ASCII detector:

function IsAscii([System.IO.FileInfo]$item)
{
    begin
    {
        # whitelist: CR, LF, and the byte range from the bullet list above
        $validList = New-Object System.Collections.Generic.List[byte]
        $validList.AddRange( [byte[]] (10,13) )
        $validList.AddRange( [byte[]] (31..127) )
    }

    process
    {
        try
        {
            # read up to the first 1KB of the file (read-only, so read-only files work too)
            $stream = $item.OpenRead()
            $bytes = New-Object byte[] 1024
            $numRead = $stream.Read($bytes, 0, $bytes.Count)

            # any byte outside the whitelist means "not plain ASCII text"
            for ($i = 0; $i -lt $numRead; ++$i)
            {
                if (!$validList.Contains($bytes[$i]))
                    { return $false }
            }
            $true
        }
        finally
        {
            if ($stream)
                { $stream.Dispose() }
        }
    }
}

The usage pattern I'm targeting is a where-object clause inserted in the pipeline between "dir" and "ss". There are other ways, depending on your scripting style.
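
For example, with IsAscii dot-sourced into the session and ss aliased to Select-String as before:

# skip directories, keep files whose first 1KB passes the ASCII whitelist, then search
dir -r | ? { !$_.PSIsContainer -and (IsAscii $_) } | ss foo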

Improving the detection algorithm along one of the suggested paths is left to the reader.

edit: I started replying to your comment in a comment of my own, but it got too long...

Above, I looked at the problem from the POV of whitelisting known-good sequences. In the application I maintained, incorrectly storing a binary as text had far worse consequences than vice versa. The same is true for scenarios where you are choosing which FTP transfer mode to use, or what kind of MIME encoding to send to an email server, etc.

In other scenarios, blacklisting the obviously bogus and allowing everything else to be called text is an equally valid technique. While U+0000 is a valid code point, it's pretty much never found in real world text. Meanwhile, \00 is quite common in structured binary files (namely, whenever a fixed-byte-length field needs padding), so it makes a great simple blacklist. VSS 6.0 used this check alone and did ok.

Aside: *.zip files are a case where checking for \0 is riskier. Unlike most binaries, their structured "header" (footer?) block is at the end, not the beginning. Assuming ideal entropy compression, the chance of no \0 in the first 1KB is (1-1/256)^1024, or about 2%. Luckily, simply scanning the rest of the 4KB cluster that NTFS read anyway drives the risk down to 0.00001%, without having to change the algorithm or write another special case.
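
A quick sanity check of those numbers from a PowerShell prompt:

[math]::Pow(255/256, 1024)   # ~0.018, i.e. the "about 2%" above
[math]::Pow(255/256, 4096)   # ~1.1e-07, on the order of the 0.00001% above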

To exclude invalid UTF-8, add \C0-C1, \F8-FD, and \FE-FF (once you've skipped past the possible BOM) to the blacklist. That's very incomplete, since you're not actually validating the sequences, but close enough for your purposes. If you want to get any fancier than this, it's time to call one of the platform libraries like IMultiLanguage2::DetectInputCodepage.
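
A rough PowerShell sketch of that blacklist (the function name is made up, it ignores the BOM caveat just mentioned, and it does not validate multi-byte sequences):

# NUL plus bytes that never occur in well-formed UTF-8: overlong leads C0-C1 and F8-FF
$blacklist = [byte[]](@(0) + (0xC0..0xC1) + (0xF8..0xFF))

function IsSuspectBinary([System.IO.FileInfo]$item)
{
    $bytes = Get-Content -LiteralPath $item.FullName -Encoding Byte -TotalCount 1024
    foreach ($b in $bytes) { if ($blacklist -contains $b) { return $true } }
    return $false
}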

Not sure why \C8 (200 decimal) is on Grep's list. It's not an overlong encoding. For example, the sequence \C8 \80 represents Ȁ (U+0200). Maybe something specific to Unix.

Richard Berg
I would give way more than one upvote for the almost exhaustive completeness of this answer if I could.
Knox
Thanks a lot for the thorough response! I had already ruled out the file-extension method because there are just too many extensions to consider, as you suggested. But I am glad you included your analysis, which was excellent. Your IsAscii() function is also very helpful. Since the goal is to detect binary, and treat all types of character encoding the same, I've started to look at an isBinary() method. I also looked at how grep does it. It comes down to a single memchr() call searching for '\0' or '\200' (UTF-8 overlong?). Is that what you found? Do you know why that works, by any chance?
kervin
@Richard: `'\200'` is octal 200, aka 0x80, not decimal 200. @kervin: `'\xC0\x80'` would be UTF-8 overlong ... in fact there's a rebel UTF-8 that uses that to encode U+0000 so that the rebs can persist in the horrid habit of using `\x00` as a string terminator. But that's nothing to do with grep :-)
John Machin
+2  A: 

Ok, after a few more hours of research, I believe I've found my solution. I won't mark this as the answer, though.

Pro Windows PowerShell had a very similar example. I had completely forgotten that I had this excellent reference. Please buy it if you are interested in PowerShell. It goes into detail on Get-Content and Unicode BOMs.

This answer to a similar question was also very helpful with the Unicode identification.

Here is the script. Please let me know if you spot any issues with it.

# The file to be tested
param ($currFile)

# encoding variable
$encoding = ""

# Get the first 1024 bytes from the file
# (-Encoding Byte is Windows PowerShell syntax; PowerShell 7+ uses -AsByteStream)
$byteArray = Get-Content -Path $currFile -Encoding Byte -TotalCount 1024

if( ("{0:X2}{1:X2}{2:X2}" -f $byteArray) -eq "EFBBBF" )
{
    # Test for the UTF-8 BOM
    $encoding = "UTF-8"
}
elseif( ("{0:X2}{1:X2}" -f $byteArray) -eq "FFFE" )
{
    # Test for the UTF-16 BOM
    $encoding = "UTF-16"
}
elseif( ("{0:X2}{1:X2}" -f $byteArray) -eq "FEFF" )
{
    # Test for the UTF-16 Big Endian BOM
    $encoding = "UTF-16 BE"
}
elseif( ("{0:X2}{1:X2}{2:X2}{3:X2}" -f $byteArray) -eq "FFFE0000" )
{
    # Test for the UTF-32 BOM
    $encoding = "UTF-32"
}
elseif( ("{0:X2}{1:X2}{2:X2}{3:X2}" -f $byteArray) -eq "0000FEFF" )
{
    # Test for the UTF-32 Big Endian BOM
    $encoding = "UTF-32 BE"
}

if($encoding)
{
    # File is text encoded
    return $false
}

# So now we're done with the text encodings that commonly have '\0'
# bytes in their streams.  ASCII files may contain a NUL ('\0') byte,
# but that's apparently rare.

# Both GNU grep and diff use variations of this heuristic

if( $byteArray -contains 0 )
{
    # A NUL byte in the first 1KB: treat as binary
    return $true
}

# This should be ASCII encoded 
$encoding = "ASCII"

return $false

Save this script as isBinary.ps1
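
To put it to work on the original problem, I drop it into the pipeline as the filter (this assumes isBinary.ps1 is in the current directory):

# search only the files that isBinary.ps1 classifies as text
Get-ChildItem -Recurse | Where-Object {
    !$_.PSIsContainer -and !(& .\isBinary.ps1 $_.FullName)
} | Select-String "foo"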

This script correctly classified every text and binary file I tried it on.

kervin
Hmmm... I should have checked for UTF-32 before UTF-16, since the UTF-32 LE BOM starts with the same FF FE bytes...
kervin
This is the same basic idea as calling IMultiLang2::DetectInputCodepage, except it supports far fewer encodings and won't reliably detect UTF-8. Per the Unicode standard, UTF-8 files are *not* supposed to be written with a BOM. Microsoft tools do it anyway -- which I appreciate, frankly -- but most others do not.
Richard Berg
Thanks for the heads up, Richard. I will look into this UTF-8 issue. I noticed that grep also searches for '\200', which seems to be at least part of the UTF-8 'overlong' check. I probably need to search for that as well, then.
kervin
\200 is a valid way to start a UTF-8 2-byte sequence. See example above. Doesn't answer the question of why grep does what it does, though.
Richard Berg
I looked into IMultiLanguage2::DetectInputCodepage on MSDN. That's a COM interface from MLang, which ships with IE. I'd prefer to skip it because of the COM interop involved. I found an article on detecting UTF-8 without the BOM [ http://www.w3.org/International/questions/qa-forms-utf-8.en.php ]. Will implement this later.
kervin
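
(For reference, one possible shape for that later check is .NET's strict UTF-8 decoder applied to the same $byteArray the script already reads, instead of the W3C regex -- just a sketch, not part of the script above.)

# a strict decoder throws on any invalid or overlong sequence, so a clean decode means the
# sampled bytes are well-formed UTF-8 (plain ASCII passes too); note that a multi-byte
# character split at the 1KB boundary would also throw
$strictUtf8 = New-Object System.Text.UTF8Encoding($false, $true)
try   { [void]$strictUtf8.GetString([byte[]]$byteArray); $isValidUtf8 = $true }
catch { $isValidUtf8 = $false }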