The one-liner should:

  • solve a real-world problem
  • not be extensively cryptic (should be easy to understand and reproduce)
  • be worth the time it takes to write it (should not be too clever)

I'm looking for practical tips and tricks (complementary examples for perldoc perlrun).

+8  A: 

The problem: A media player does not automatically load subtitles because their names differ from those of the corresponding video files.

Solution: Rename all *.srt (files with subtitles) to match the *.avi (files with video).

perl -e'while(<*.avi>) { s/avi$/srt/; rename <*.srt>, $_ }'

CAVEAT: Sorting order of original video and subtitle filenames should be the same.

Here is a more verbose version of the above one-liner:

my @avi = glob('*.avi');
my @srt = glob('*.srt');

for my $i (0..$#avi)
{
  my $video_filename = $avi[$i];
  $video_filename =~ s/avi$/srt/;   # 'movie1.avi' -> 'movie1.srt'

  my $subtitle_filename = $srt[$i]; # 'film1.srt'
  rename($subtitle_filename, $video_filename); # 'film1.srt' -> 'movie1.srt'
}
J.F. Sebastian
+12  A: 

Please see my slides for "A Field Guide To The Perl Command Line Options."

Andy Lester
On the `-e examples' slide: on Windows I prefer q() and qq() to quotes. It allows me to use the same one-liner as on Linux by replacing just the two outer quotes. Windows: perl -E"say q(Hello, World)". Linux: perl -E'say q(Hello, World)'
J.F. Sebastian
A: 

At some point I found that anything I'd want to do with perl that is short enough for the command line with 'perl -e' can be done better, more easily, and faster with ordinary ZSH features, without the hassle of quoting. E.g. the example above could be done like this:

for foo in *.avi; mv *.srt ${foo:r}.srt

UPDATE

The one-liner above is obviously wrong, sorry for not reading carefully. Here is the correct version:

srt=(*.srt); for foo in *.avi; mv $srt[1] ${foo:r}.srt && srt=($srt[2,-1])
jkramer
You misunderstand the perl. The <*.srt> is an iterator, returning one of the .srt files each time through the outer loop.
ysth
Sorry, my fault, I should really read more carefully. I'll fix that in the answer.
jkramer
glob-in-scalar-context is really really easy to get wrong; it should be avoided wherever possible.
ysth
Doesn't the new version get out of sync on an mv failure?
ysth
Well, the whole idea of this thing is somewhat unstable, since it assumes that for every .avi there's a .srt and that both, when sorted alphabetically, have each avi/srt pair at the same position in the lists. However, you can replace the && with ; and put braces around it. ;)
jkramer
+9  A: 

Squid log files. They're great, aren't they? Except by default they have seconds-from-the-epoch as the time field. Here's a one-liner that reads from a squid log file and converts the time into a human readable date:

perl -pe's/([\d.]+)/localtime $1/e;' access.log

With a small tweak, you can make it only display lines with a keyword you're interested in. The following watches for stackoverflow.com accesses and prints only those lines, with a human readable date. To make it more useful, I'm giving it the output of tail -f, so I can see accesses in real time:

tail -f access.log | perl -ne's/([\d.]+)/localtime $1/e,print if /stackoverflow\.com/'

Cheerio,

Paul
pjf
+3  A: 

One of the biggest bandwidth hogs at $work is downloading web advertising, so I'm looking at the low-hanging fruit waiting to be picked. I've gotten rid of Google ads; now I have Microsoft in my sights. So I run a tail on the log file and pick out the lines of interest:

tail -F /var/log/squid/access.log | \
perl -ane 'BEGIN{$|++} $F[6] =~ m{\Qrad.live.com/ADSAdClient31.dll}
    && printf "%02d:%02d:%02d %15s %9d\n",
        sub{reverse @_[0..2]}->(localtime $F[0]), @F[2,4]'

What the Perl stage of the pipe does is begin by setting autoflush to true, so that any line that is acted upon is printed out immediately. Otherwise the output is chunked up and one receives a batch of lines only when the output buffer fills. The -a switch splits each input line on whitespace and saves the results in the array @F (functionality inspired by awk's capacity to split input records into its $1, $2, $3... variables).

It checks whether the 7th field in the line contains the URI we seek (using \Q to save us the pain of escaping uninteresting metacharacters). If a match is found, it pretty-prints the time, the source IP and the number of bytes returned from the remote site.

The time is obtained by taking the epoch time in the first field and using localtime to break it down into its components (second, minute, hour, day, month, year). Taking a slice of the first three elements returned (second, minute and hour) and reversing the order gives hour, minute and second. These come back as a three-element list, followed by a slice of the third (IP address) and fifth (size) fields from the original @F array. These five arguments are passed to printf, which formats the results.

dland
+10  A: 

You may not think of this as Perl, but I use ack religiously (it's a smart grep replacement written in Perl) and that lets me edit, for example, all of my Perl tests which access a particular part of our API:

vim $(ack --perl -l 'api/v1/episode' t)

As a side note, if you use vim, you can run all of the tests in your editor's buffers.

For something with more obvious (if simple) Perl, I needed to know how many test programs use our test fixtures in the t/lib/TestPM directory (I've cut down the command for clarity):

ack $(ls t/lib/TestPM/|awk -F'.' '{print $1}'|xargs perl -e 'print join "|" => @ARGV') aggtests/ t -l

Note how the "join" turns the results into a regex to feed to ack.

Ovid
+2  A: 

In response to Ovid's vim/ack combination:

I too am often searching for something and then want to open the matching files in Vim, so I made myself a little shortcut some time ago (works in ZSH only, I think):

function vimify-eval; {
    if [[ ! -z "$BUFFER" ]]; then
        if [[ $BUFFER = 'ack'* ]]; then
            BUFFER="$BUFFER -l"
        fi  
        BUFFER="vim  \$($BUFFER)"
        zle accept-line
    fi  
}

zle -N vim-eval-widget vimify-eval

bindkey '^P' vim-eval-widget

It works like this: I search for something using ack, like ack some-pattern. I look at the results, and if I like them, I press arrow-up to get the ack line again and then press CTRL+P. What happens then is that ZSH appends "-l" (list filenames only) if the command starts with "ack", then puts "$(...)" around the command and "vim " in front of it, and executes the whole thing.

jkramer
A: 

Here is one that I find handy when dealing with a collection compressed log files:

   open STATFILE, "zcat $logFile|" or die "Can't open zcat of $logFile" ;
Kwondri
One-liner means an entire program on one line, not one useful line of a program.
ysth
I would actually separate this into 2 or more lines myself.
Brad Gilbert
+2  A: 

I use this quite frequently to quickly convert epoch times to a useful datestamp.

perl -l -e 'print scalar(localtime($ARGV[0]))'

Make an alias in your shell (this version reads the epoch value from standard input so it can be piped to; note the escaped \$_ so the shell doesn't expand it when the alias is defined):

alias e2d="perl -lne 'print scalar localtime \$_'"

Then pipe an epoch number to the alias.

echo 1219174516 | e2d

Many programs and utilities on Unix/Linux use epoch values to represent time, so this has proved invaluable for me.

jtimberman
To get a readable datestamp from epoch seconds you can also use GNU date with the @ sign: date --date=@1219174516
Philip Durbin
+1  A: 

Filters a stream of white-space separated stanzas (name/value pair lists), sorting each stanza individually:

perl -00 -ne 'print sort split /^/'
mtk
`sort()` will put empty lines at the top of each paragraph. I guess you actually meant this: perl -00 -ne'($n, @a) = sort split /^/; print @a, $n' Both one-liners will fail if there is no newline after the last paragraph.
J.F. Sebastian
+1  A: 

Expand all tabs to spaces: perl -pe'1while+s/\t/" "x(8-pos()%8)/e'

Of course, this could be done with :set et, :ret in Vim.

ephemient
perl -pe'1while+s/\t/" "x(8-(pos)%8)/e' Parentheses are required.
J.F. Sebastian
+2  A: 

Remove duplicates in path variable:

set path=(`echo $path | perl -e 'foreach(split(/ /,<>)){print $_," " unless $s{$_}++;}'`)
dr_pepper
What is the separator between paths in $path? See my answer.
J.F. Sebastian
+3  A: 

The Perl one-liner I use the most is the Perl calculator

perl -ple '$_=eval'
If you're running Perl 5.10 you could run perl -plE '$_=eval', to enable 5.10 features.
Brad Gilbert
+6  A: 

"last" means "final", as in, I'd never write another again.

That's not gonna happen. I'm gonna keep writing Perl one-liners as long as the prompt accepts it.

Perhaps you meant "latest".

Randal Schwartz
Thank you. I've corrected the title. The time for my last meal has not come yet :)
J.F. Sebastian
+4  A: 

@dr_pepper

Remove literal duplicates in $PATH:

$ export PATH=$(perl -F: -ane'print join q/:/, grep { !$c{$_}++ } @F'<<<$PATH)

Print unique clean paths from the %PATH% environment variable (it doesn't touch ../ and the like; replace File::Spec->rel2abs with Cwd::realpath if that is desirable). It is not a one-liner, to be more portable:

#!/usr/bin/perl -w
use File::Spec; 

$, = "\n"; 
print grep { !$count{$_}++ } 
      map  { File::Spec->rel2abs($_) } 
      File::Spec->path;
J.F. Sebastian
Thanks for showing me this, I was looking for a shorter one-liner to do this. In my environment, white space is the separator when using lowercase $path. Is it better to use upper case $PATH?
dr_pepper
In my shell (bash), $path and $PATH are different variables (names are case-sensitive). For example: $ a=2; A=3; echo $(($a * $A)) prints '6'.
J.F. Sebastian
Duplicates could be removed using a combination of programs `tr`, `sort`, `uniq`, `cut` and a pipe.
J.F. Sebastian
But, using tr, sort, etc changes the path order, which may cause undesirable side effects.
dr_pepper
In ZSH the variable `path` is bound to the variable `PATH`, so `PATH` is always the elements of `path` joined with a colon, and `path` is always an array containing the chunks of `PATH` split on colons. To make them unique, just apply the -U modifier to one of the variables: typeset -U PATH
jkramer
+8  A: 

The common idiom of using find ... -exec rm {} \; to delete a set of files somewhere in a directory tree is not particularly efficient in that it executes the rm command once for each file found. One of my habits, born from the days when computers weren't quite as fast (dagnabbit!), is to replace many calls to rm with one call to perl:

find . -name '*.whatever' | perl -lne unlink

The perl part of the command line reads the list of files emitted* by find, one per line, trims the newline off, and deletes the file using perl's built-in unlink() function, which takes $_ as its argument if no explicit argument is supplied. ($_ is set to each line of input thanks to the -n flag.) (*These days, most find commands do -print by default, so I can leave that part out.)

I like this idiom not only because of the efficiency (possibly less important these days) but also because it has fewer chorded/awkward keys than typing the traditional -exec rm {} \; sequence. It also avoids quoting issues caused by file names with spaces, quotes, etc., of which I have many. (A more robust version might use find's -print0 option and then ask perl to read null-delimited records instead of lines, but I'm usually pretty confident that my file names do not contain embedded newlines.)

John Siracusa
I've been using xargs to solve that problem from a time before Perl was a glint in Larry's eye :-).
paxdiablo
+3  A: 

All one-liners from the answers collected in one place:

  • perl -pe's/([\d.]+)/localtime $1/e;' access.log

  • ack $(ls t/lib/TestPM/|awk -F'.' '{print $1}'|xargs perl -e 'print join "|" => @ARGV') aggtests/ t -l

  • perl -e'while(<*.avi>) { s/avi$/srt/; rename <*.srt>, $_ }'

  • find . -name '*.whatever' | perl -lne unlink

  • tail -F /var/log/squid/access.log | perl -ane 'BEGIN{$|++} $F[6] =~ m{\Qrad.live.com/ADSAdClient31.dll} && printf "%02d:%02d:%02d %15s %9d\n", sub{reverse @_[0..2]}->(localtime $F[0]), @F[2,4]'

  • export PATH=$(perl -F: -ane'print join q/:/, grep { !$c{$_}++ } @F'<<<$PATH)

  • alias e2d="perl -le \"print scalar(localtime($ARGV[0]));\""

  • perl -ple '$_=eval'

  • perl -00 -ne 'print sort split /^/'

  • perl -pe'1while+s/\t/" "x(8-pos()%8)/e'

  • tail -f log | perl -ne '$s=time() unless $s; $n=time(); $d=$n-$s; if ($d>=2) { print qq ($. lines in last $d secs, rate ),$./$d,qq(\n); $. =0; $s=$n; }'

See corresponding answers for their descriptions.

J.F. Sebastian
+2  A: 

Remove MS-DOS line-endings.

perl -p -i -e 's/\r\n$/\n/' htdocs/*.asp
JDrago
1. `-i` requires a suffix e.g., `-i.bak`. 2. It won't work on Windows.
J.F. Sebastian
I was wondering how to do Perl pie in Windows. Thanks for the tip.
JDrago
+1  A: 

One of the most recent one-liners that got a place in my ~/bin:

perl -ne '$s=time() unless $s; $n=time(); $d=$n-$s; if ($d>=2) { print "$. lines in last $d secs, rate ",$./$d,"\n"; $. =0; $s=$n; }'

You would use it against a tail of a log file, and it will print the rate at which lines are being output.

Want to know how many hits per second you are getting on your webservers? tail -f log | this_script.

melo
It is a mini version of pipe viewer (`pv`): <http://www.ivarch.com/programs/pv.shtml>. But your one-liner also works on Windows.
J.F. Sebastian
+1  A: 

Extracting Stack Overflow reputation without having to open a web page:

perl -nle "print '  Stack Overflow        ' . $1 . '  (no change)' if /\s{20,99}([0-9,]{3,6})<\/div>/;" "SO.html"  >> SOscores.txt

This assumes the user page has already been downloaded to the file SO.html. I use wget for this purpose. The notation here is for the Windows command line; it would be slightly different for Linux or Mac OS X. The output is appended to a text file.

I use it in a BAT script to automate sampling of reputation on the four sites in the family: Stack Overflow, Server Fault, Super User and Meta Stack Overflow.

Peter Mortensen
+1  A: 

Get human-readable output from du, sorted by size:

perl -e '%h=map{/.\s/;7x(ord$&&10)+$`,$_}`du -h`;print@h{sort%h}'
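The golfed hash above builds a sortable key from each human-readable size. If GNU coreutils is available, sort's human-numeric mode gives a similar result (a simpler alternative, not part of the original answer):

```shell
# sort -h understands suffixes like K, M, G and orders lines by actual size.
du -h | sort -h
```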
Adam Bellaire
+1  A: 

I have a list of tags with which I identify portions of text. The master list is of the format:

text description {tag_label}

It's important that the {tag_label} are not duplicated. So there's this nice simple script:

perl -ne '($c) = $_ =~ /({.*?})/; print $c,"\n" ' $1 | sort  | uniq -c | sort -d

I know that I could do the whole lot in shell or perl, but this was the first thing that came to mind.

singingfish
`perl -ne'$f{$1}++ while /({.*?})/g; END{ print "$f{$_} $_\n" for (sort {$f{$a} <=> $f{$b}} keys %f) }' $1`. You're right that for such tasks the first thing in mind is good enough. btw, are you sure that there could be only one tag per line?
J.F. Sebastian
To sort tags with the same frequencies: `perl -ne'$f{$1}++ while /({.*?})/g; END{ print "$f{$_} $_\n" for (sort { $f{$a} <=> $f{$b} or $a cmp $b } keys %f) }' $1`
J.F. Sebastian