I'm running memcached with the following bash command pattern:

memcached -vv 2>&1 | tee memkeywatch2010098.log 2>&1 | ~/bin/memtracer.py | tee memkeywatchCounts20100908.log

to try to track down gets that are unmatched by sets for keys platform-wide.

The memtracer script is below and works as desired, with one minor issue: watching the intermediate log file's size, memtracer.py doesn't start receiving input until memkeywatchYMD.log is about 15-18 KB. Is there a better way to read from stdin, or perhaps a way to cut the buffer size down to under 1 KB for faster response times?

#!/usr/bin/python

import sys
from collections import defaultdict

if __name__ == "__main__":
    # Net count per key: each server "sending" reply increments, each
    # client "set" decrements, so a positive count flags gets that were
    # never matched by a set.
    keys = defaultdict(int)
    GET = 1
    SET = 2
    CLIENT = 1
    SERVER = 2

    for line in sys.stdin:
        key = None
        components = line.strip().split(" ")
        # In memcached -vv output, lines starting with "<" are data from
        # the client; everything else is treated as server output.
        direction = CLIENT if components[0].startswith("<") else SERVER

        if direction == CLIENT:
            command = SET if components[1] == "set" else GET
            key = components[2]
            if command == SET:
                keys[key] -= 1
        elif direction == SERVER:
            command = components[1]
            if command == "sending":
                key = components[3]
                keys[key] += 1

        if key is not None:
            print "%s:%s" % (key, keys[key])
Answer (+3):

You can completely remove buffering from stdin/stdout by using Python's -u flag:

-u     : unbuffered binary stdout and stderr (also PYTHONUNBUFFERED=x)
         see man page for details on internal buffering relating to '-u'

and the man page clarifies:

   -u     Force stdin, stdout and stderr to  be  totally  unbuffered.   On
          systems  where  it matters, also put stdin, stdout and stderr in
          binary mode.  Note that there is internal  buffering  in  xread-
          lines(),  readlines()  and  file-object  iterators ("for line in
          sys.stdin") which is not influenced by  this  option.   To  work
          around  this, you will want to use "sys.stdin.readline()" inside
          a "while 1:" loop.

Beyond this, altering the buffering for an existing file is not supported, but you can make a new file object with the same underlying file descriptor as an existing one, and possibly different buffering, using os.fdopen. I.e.,

import os
import sys
newin = os.fdopen(sys.stdin.fileno(), 'r', 100)

should bind newin to the name of a file object that reads the same FD as standard input, but buffered by only about 100 bytes at a time (and you could continue with sys.stdin = newin to use the new file object as standard input from there onwards). I say "should" because this area used to have a number of bugs and issues on some platforms (it's pretty hard functionality to provide cross-platform with full generality) -- I'm not sure what its state is now, but I'd definitely recommend thorough testing on all platforms of interest to ensure that everything goes smoothly. (-u, removing buffering entirely, should work with fewer problems across all platforms, if that might meet your requirements).
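Putting the two together, a hypothetical adaptation of your tracer's read loop might look like this (a sketch only, with the original parsing elided to a comment):

import os
import sys

# Rebind sys.stdin to a file object on the same FD with a ~100-byte buffer.
sys.stdin = os.fdopen(sys.stdin.fileno(), 'r', 100)

while 1:
    line = sys.stdin.readline()   # avoids the iterator's read-ahead buffer
    if not line:
        break
    # ... parse the memcached trace line here, as in the original script ...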

Alex Martelli
@Alex thanks, the -u flag for a Linux environment was the winner. I had previously tried using os.fdopen and ran into the same buffering issue, even when I set the buffer size to 10.
David