Is it possible to programmatically pull a single file from a decently sized .tar.gz without extracting the entire tarball to disk? Essentially I need to reach inside large tar.gz files over the network and extract one small text file. It seems over-the-top to pull the whole tarball down, extract it to disk, grab the one file, and then delete everything else. I'm also going to be doing this recursively (e.g. package dependencies, where each text file points to more tar.gz's), so the less network traffic and the fewer CPU cycles I can get away with, the better.
From the man page, to extract blah.txt from foo.tar.gz:
tar -xzf foo.tar.gz blah.txt
(And this belongs on Super User, of course, but hey, prompt answers are nice too.)
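Since the question mentions doing this over the network: you can also stream the archive straight through tar instead of saving it first. A rough sketch, assuming GNU tar and a hypothetical URL (gzip isn't seekable, so tar still reads through the whole compressed stream, but nothing is written to disk except the one extracted file):
# hypothetical URL; -f - makes tar read the archive from stdin
curl -sL https://example.com/foo.tar.gz | tar -xzf - blah.txt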
I echo Jefromi's answer, with the addition that you need to include the path to the file if the tar file contains directories (this may seem obvious to some, but it wasn't initially clear to me how to specify the directory structure).
For example, if you created the tar from the src/ directory and blah.txt was under release1/shared/, you would go back to the src/ directory (if you want it extracted in the same place) and run:
tar -xzf foo.tar.gz release1/shared/blah.txt
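If you only need the file's contents in a pipeline, rather than the file recreated under release1/shared/ on disk, GNU and BSD tar both accept -O to write the extracted member to stdout. A small sketch along the same lines:
# -O (--to-stdout) prints the member instead of writing release1/shared/blah.txt
tar -xzOf foo.tar.gz release1/shared/blah.txt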
If you don't remember the directory structure of your tar file (I'm a little disorganized and sometimes forget where I created the tar), you can always run
tar -tzf foo.tar.gz
to see the contents, canceling out (Ctrl+C) once you get an idea of your directory structure.
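Since the listing of a big archive can be long, one way to avoid scrolling (or cancelling) is to pipe it through grep to find the member's full path before extracting; for example:
# find the full path of blah.txt inside the archive
tar -tzf foo.tar.gz | grep 'blah.txt'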