views:

34

answers:

2

This is a tough one. I have a Response filter set up to transform the HTML before sending it back out to the browser (http://aspnetresources.com/articles/HttpFilters). This works fine on everyone's machine but mine. Actually, it was working on my machine until I had to do a hard reset because it locked up.

public override void Write(byte[] buffer, int offset, int count)
{
    string strBuffer = System.Text.UTF8Encoding.UTF8.GetString(buffer, offset, count);

For everyone else (and mine previously), strBuffer contains HTML. Now, for whatever reason, it's returning junk characters for me. Any ideas? I'm pulling my hair out!

Update

Turns out that "Enable dynamic content compression" is causing the issue. For some reason the response is getting gzipped before being passed into the filter.

Solution

Setting the "dynamicCompressionBeforeCache" to false in the web.config fixed the issue.

<urlCompression doStaticCompression="true" doDynamicCompression="true" dynamicCompressionBeforeCache="false" />
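For completeness: a gzip stream always begins with the magic bytes 0x1F 0x8B (decimal 31, 139), so a defensive check in the filter can at least detect when compressed data is arriving instead of HTML. A minimal standalone sketch (the `GzipSniffer` name and demo bytes are my own illustration, not from the linked article):

```csharp
using System;
using System.Text;

static class GzipSniffer
{
    // A gzip stream always starts with the magic bytes 0x1F 0x8B.
    // If a Write buffer begins with them, the response was compressed
    // before it reached the filter and must not be decoded as UTF-8.
    public static bool LooksGzipped(byte[] buffer, int offset, int count)
    {
        return count >= 2 && buffer[offset] == 0x1F && buffer[offset + 1] == 0x8B;
    }

    static void Main()
    {
        byte[] compressed = { 0x1F, 0x8B, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x04 };
        byte[] html = Encoding.UTF8.GetBytes("<html>");

        Console.WriteLine(GzipSniffer.LooksGzipped(compressed, 0, compressed.Length)); // True
        Console.WriteLine(GzipSniffer.LooksGzipped(html, 0, html.Length));             // False
    }
}
```

Inside the real filter's Write override, hitting that signature would be the cue to pass the buffer straight through to the wrapped stream rather than transforming it.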
A: 

Sounds like something went wrong. I too have had some strange behaviour after a lockup. What worked for me was to delete the temp files in C:\Windows\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files

Jeroen
I tried that several times. That usually does the trick for cleaning up weirdness, but not this time.
Micah
A: 

You've specified these bytes: 31, 139, 8, 0, 0, 0, 0, 0, 4

That's not valid UTF-8. In particular, it would mean Unicode character U+001F ("INFORMATION SEPARATOR ONE") followed by bytes 139 and 8... and 139 followed by 8 isn't a valid UTF-8 byte sequence. Even if those did form a valid sequence, you'd then have 5 U+0000 characters (NUL) followed by U+0004 (END OF TRANSMISSION). Hardly valid HTML.
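You can see this directly by feeding those exact bytes through the UTF-8 decoder. A small sketch (my own illustration; `Encoding.UTF8` uses replacement fallback by default, so the invalid byte 139 comes back as U+FFFD, the Unicode replacement character):

```csharp
using System;
using System.Text;

class InvalidUtf8Demo
{
    static void Main()
    {
        // The reported bytes: decimal 31, 139, 8, 0, 0, 0, 0, 0, 4.
        // 0x1F 0x8B is the gzip magic number, so this is compressed data.
        byte[] bytes = { 31, 139, 8, 0, 0, 0, 0, 0, 4 };

        // Byte 139 has no valid interpretation as UTF-8 here, so the
        // decoder substitutes U+FFFD rather than producing real text.
        string decoded = Encoding.UTF8.GetString(bytes);

        Console.WriteLine(decoded.IndexOf('\uFFFD') >= 0); // True
    }
}
```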

I don't know what you're actually filtering, but it isn't valid UTF-8 text. It doesn't look likely to be text at all, in fact. Is it possible that you're actually trying to apply a filter to binary data such as an image?

Note that you have another fundamental problem with your method of filtering: you're assuming that each buffer contains complete text. It's quite possible for you to receive one buffer containing the first half of a character and then a second buffer containing the remainder of it. That's what the System.Text.Decoder class is for - it's stateful, remembering partial characters.
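The partial-character problem is easy to demonstrate in isolation. A standalone sketch (my own illustration; the two-byte character "é" stands in for any character that happens to straddle two Write calls):

```csharp
using System;
using System.Text;

class DecoderDemo
{
    static void Main()
    {
        // "é" is two bytes in UTF-8 (0xC3 0xA9). Split it across two
        // buffers to simulate a character straddling two Write calls.
        byte[] part1 = { (byte)'a', 0xC3 };
        byte[] part2 = { 0xA9, (byte)'b' };

        // Calling Encoding.UTF8.GetString on each chunk separately would
        // mangle the split character; a Decoder is stateful and carries
        // the pending lead byte over to the next call.
        Decoder decoder = Encoding.UTF8.GetDecoder();

        char[] chars = new char[4];
        int n1 = decoder.GetChars(part1, 0, part1.Length, chars, 0);
        int n2 = decoder.GetChars(part2, 0, part2.Length, chars, n1);

        Console.WriteLine(new string(chars, 0, n1 + n2)); // prints "aéb"
    }
}
```

In a response filter, that means keeping one Decoder instance per request and reusing it across every Write call rather than decoding each buffer independently.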

Jon Skeet
It turns out turning off "Enable dynamic content compression" seems to fix it. Why would the data be getting compressed before getting passed into my filter? Does compression happen somewhere further down the chain? Does Module declaration order matter?
Micah
@Micah: I've no idea, I'm afraid - but it being compressed would certainly explain why this isn't text data.
Jon Skeet