1074 views · 18 answers

What is the most important module you pull off of CPAN to add to a stock Perl install?

One module only per response, please.

Please explain why you think the module is best-of-breed.

+5  A: 

For Web applications, I like:

CGI::Application

It makes it very easy to do MVC in Perl, and build large web applications. Pair this with Template::Toolkit and Class::DBI, and you are all set!
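For a flavour of the MVC style described above, here is a minimal sketch of a CGI::Application subclass (the class name, run modes, and markup are all made up):

```perl
package MyApp;
use strict;
use warnings;
use base 'CGI::Application';

# Map run modes (the "controller" actions) to methods.
sub setup {
    my $self = shift;
    $self->start_mode('list');
    $self->run_modes(
        list => 'show_list',
        view => 'show_item',
    );
}

sub show_list {
    my $self = shift;
    # With CGI::Application::Plugin::TT this would hand off
    # to a Template::Toolkit template instead.
    return '<html><body>Item list goes here</body></html>';
}

sub show_item { '<html><body>One item</body></html>' }

1;

# The .cgi script is then just:
#   use MyApp;
#   MyApp->new->run;
```

Each incoming request is dispatched to the run-mode method named by a query parameter, which is what keeps larger applications tidy.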

pkaeding
+16  A: 

DBI is pretty useful too.

It's "the best" because it provides an interface for all your Database connection needs. DBI is so good that I've actually never connected to a database using Perl without it. For a language whose motto is "There's More than One Way to Do it" that's pretty damn impressive.
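A minimal sketch of the idiom, here using an in-memory SQLite database via DBD::SQLite (any other DBD driver slots in the same way):

```perl
use strict;
use warnings;
use DBI;

# Swap "SQLite" for "mysql", "Pg", etc. -- the rest of the
# code stays the same.
my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
    { RaiseError => 1, AutoCommit => 1 });

$dbh->do('CREATE TABLE bar (foo INTEGER)');

# Placeholders keep values out of the SQL string itself.
my $sth = $dbh->prepare('INSERT INTO bar (foo) VALUES (?)');
$sth->execute($_) for 1 .. 3;

my ($count) = $dbh->selectrow_array('SELECT COUNT(*) FROM bar');
print "$count rows\n";    # 3 rows
```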

David McLaughlin
_DBI is so good that I've actually never connected to a database using Perl without it._ Hardly anybody has, in the last 5 years or so. I've never used another route in 10 years. :)
skiphoppy
+7  A: 

Also, for anything web related (it can be used for many other things too):

Template::Toolkit

This seems to be the most expressive templating system Perl offers, at least in my experience. I find HTML::Template and its friends to be too restrictive. On the other hand, you could make the argument that Template::Toolkit gives you more than enough rope to hang yourself, increasing the risk of putting too much application logic in the view.
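A small sketch of the Template::Toolkit style (the template text and variables are illustrative; real templates usually live in files):

```perl
use strict;
use warnings;
use Template;

my $tt = Template->new;

# A string ref works for a demo; normally you'd pass a filename.
my $template = "Hello [% name %], you have [% items.size %] items\n";

my $output;
$tt->process(\$template,
    { name => 'World', items => [ 'a', 'b', 'c' ] },
    \$output) or die $tt->error;

print $output;    # Hello World, you have 3 items
```

Virtual methods like `items.size` are part of what makes it so expressive compared to the plain variable substitution of HTML::Template.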

pkaeding
+2  A: 

It always depends on what you do. If you want to connect to a database, DBI is essential. For web stuff, HTML::Entities can be very useful. For client-side applications it's LWP::UserAgent or LWP::Simple.

There's no "one most important" module, just as there's no "one best car".

moritz
+1  A: 

I constantly find myself installing Date::Manip - it's slow as hell and I always question my judgement, but I always seem to need to handle arbitrary date formats (ISO standards, whatever the hell sucktastic thing Oracle spits out, seconds since the epoch), and I find that allowing the user to type "last thursday" can be a great usability tweak...
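A sketch of the flexible parsing described above (the sample inputs are invented, and what "last thursday" resolves to naturally depends on the current date):

```perl
use strict;
use warnings;
use Date::Manip qw(ParseDate UnixDate);

# Wildly different inputs all normalize to one internal form.
for my $input ('2008-09-15', 'Sep 15 2008', 'last thursday') {
    my $date = ParseDate($input) or die "can't parse '$input'";
    print "$input => ", UnixDate($date, '%Y-%m-%d'), "\n";
}
```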

Tim Howland
+5  A: 

For database work, I would not work directly with DBI any more. Instead I always use DBIx::Simple. This is not an ORM or any other kind of abstraction layer; rather, it simply shuffles the API structure around to produce more readable client code:

my $rs = $db->query( 'SELECT foo FROM bar' );
while ( my $row = $rs->hash ) { ... }

This isn’t very different from what DBI provides, but it does a very thorough job of cleaning up the unreadable mess that is the collection of (fetch|select)(row|all)_(array|hash)(ref)? methods. (Whose return values are sometimes further modulated by passing in an array- or hash-ref as a Slice attribute! Ugh.)

Aristotle Pagaltzis
+3  A: 

If you want to write SQL in your programs, and you aren’t interested in ORMs or other large abstractions, do investigate SQL::Interp. This module takes a bunch of SQL fragments intermixed with Perl data structures and renders the whole to an SQL statement with placeholders plus a list of bind values. This lets you write SQL queries very naturally while still using placeholders rather than interpolating values into the SQL statement directly, and it also provides shortcut notation based on compound data structures:

sql_interp( 'INSERT INTO foo', { bar => 1, baz => 2 } );
# 'INSERT INTO foo (bar, baz) VALUES(?, ?)',  1,  2

Note that unlike SQL::Abstract (which I consider a failed experiment), it doesn't try to map all of SQL onto Perl data structures – just the few aspects of it where such a mapping makes sense.

Aristotle Pagaltzis
+7  A: 

List::MoreUtils

It's incredibly useful to be able to use any, or all, or more esoteric things like part. Mostly because if you don't use this, you end up writing your own tiny subroutines to accomplish the same thing everywhere, and potentially end up doing it wrong.
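A short sketch of the functions mentioned above (the data is invented):

```perl
use strict;
use warnings;
use List::MoreUtils qw(any all part);

my @nums = (1 .. 10);

# Short-circuiting predicates instead of hand-rolled loops:
print "has an even number\n" if any { $_ % 2 == 0 } @nums;
print "all positive\n"       if all { $_ > 0 } @nums;

# part() splits a list into buckets by an index function:
# bucket 0 gets the evens ($_ % 2 == 0), bucket 1 the odds.
my ($evens, $odds) = part { $_ % 2 } @nums;
print "@$evens\n";    # 2 4 6 8 10
print "@$odds\n";     # 1 3 5 7 9
```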

Adam Bellaire
A: 

For tying objects to the database, I like

DBI::Class

It works kind of like Rails's ActiveRecord: you get an object, like any other Perl object, and you can access its fields through getters and setters. Its fields are then persisted in the DB automatically (unless you turn off autocommit, in which case you call commit yourself).

I find this works very well for the models in my MVC web applications.

pkaeding
There's no such module. Do you mean Class::DBI (now considered deprecated), DBIx::Class (well regarded by many people), or Oryx::DBI::Class (which I know nothing about)?
dland
+8  A: 

Moose

An elegant & powerful OO system for Perl 5 (moose.perl.org).

I think this module represents a paradigm shift for Perl.
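For a flavour of what the shift looks like, a minimal sketch of a Moose class (the class itself is made up):

```perl
package Point;
use Moose;    # also turns on strict and warnings

# Typed, defaulted attributes with generated accessors --
# no hand-written constructor or getter/setter boilerplate.
has 'x' => (is => 'rw', isa => 'Int', default => 0);
has 'y' => (is => 'rw', isa => 'Int', default => 0);

sub distance_from_origin {
    my $self = shift;
    return sqrt($self->x ** 2 + $self->y ** 2);
}

package main;

my $p = Point->new(x => 3, y => 4);
print $p->distance_from_origin, "\n";    # 5
```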

/I3az/

draegtun
+7  A: 

A very useful module I don't see mentioned very often is HTML::TreeBuilder. It parses nasty tag-soup HTML and gives back a DOM-like structure, which supports XPath-like commands to traverse the tree. It's great for scraping and manipulating HTML when you aren't sure whether the input will be valid XHTML.
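A sketch of feeding it deliberately sloppy markup (the HTML here is invented):

```perl
use strict;
use warnings;
use HTML::TreeBuilder;

# Unclosed <li>s and unquoted attributes -- classic tag soup.
my $html = '<ul><li class=item>first<li class=item>second</ul>';

my $tree = HTML::TreeBuilder->new_from_content($html);

# look_down() finds elements by tag and attribute criteria.
my @items = $tree->look_down(_tag => 'li', class => 'item');
my @texts = map { $_->as_text } @items;
print scalar(@texts), " items: @texts\n";    # 2 items: first second

$tree->delete;    # trees contain circular refs; free explicitly
```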

Keeth
I'm pretty enamored of HTML::TreeBuilder (and its XML cousin) myself.
skiphoppy
+2  A: 

The best non-core Perl module in my opinion is LWP.

I have not found an easier-to-use yet equally powerful and comprehensive web user agent in any language.
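The basic fetch idiom is a few lines (the URL and agent string are placeholders):

```perl
use strict;
use warnings;
use LWP::UserAgent;

my $ua = LWP::UserAgent->new(timeout => 10);
$ua->agent('MyApp/0.1');    # identify yourself politely

my $response = $ua->get('http://www.example.com/');

if ($response->is_success) {
    print $response->decoded_content;
}
else {
    warn 'Request failed: ' . $response->status_line . "\n";
}
```

The same object handles redirects, cookies, proxies, and POSTs, which is where the "comprehensive" part comes in.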

Keeth
+2  A: 

Spreadsheet::WriteExcel
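The answer above is just the module name; for context, a minimal sketch of what it does (the file, sheet, and cell contents are arbitrary):

```perl
use strict;
use warnings;
use Spreadsheet::WriteExcel;

my $workbook  = Spreadsheet::WriteExcel->new('report.xls');
my $worksheet = $workbook->add_worksheet('Totals');

# Formatting goes through reusable format objects.
my $bold = $workbook->add_format(bold => 1);

$worksheet->write(0, 0, 'Item',  $bold);
$worksheet->write(0, 1, 'Count', $bold);
$worksheet->write(1, 0, 'widgets');
$worksheet->write(1, 1, 42);

$workbook->close;
```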

+2  A: 

I seem to remember a similar question shortly after SO launched; you may want to search for that one and see what people answered then. Ah, here it is.

To recap from my answer there, these are the first few modules I always install when setting up a new host:

Randy

rjray
+1  A: 

Someone mentioned HTML::TreeBuilder for scraping. Here’s a hot tip that uses TreeBuilder under the covers: Web::Scraper. It lets you define scraper objects that consist of hash key names, XPath expressions that locate elements of interest, and code to extract bits of data from them. A scraper object takes a URL and returns a hash of the extracted data. The extractor code for each key can itself be another scraper object, if necessary, so that you can define how to scrape repeated compound page elements: provide the XPath that finds the compound element in an outer scraper, then provide a bunch more XPaths to pull out its individual bits in an inner scraper. The result is a nested data structure.

In short, you can very elegantly suck data from all over a page into a Perl data structure. In doing so, the full power of XPath + Perl is available against any page, no matter how nasty a tag soup it is. The resulting scraper scripts are much easier to maintain and far more tolerant of minor markup variations than regex-based scrapers.
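A sketch of the nesting described above, run against an invented HTML snippet (selectors and hash keys are made up; `scrape()` also accepts a URI to fetch a live page):

```perl
use strict;
use warnings;
use Web::Scraper;

# The outer scraper finds each story; the inner one pulls
# the linked title and URL out of it.
my $stories = scraper {
    process 'li.story', 'stories[]' => scraper {
        process 'a', title => 'TEXT', url => '@href';
    };
};

my $html = <<'HTML';
<ul>
  <li class="story"><a href="/a">First story</a></li>
  <li class="story"><a href="/b">Second story</a></li>
</ul>
HTML

my $result = $stories->scrape(\$html);
print $_->{title}, "\n" for @{ $result->{stories} };
# First story
# Second story
```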

Bad news: as yet, its documentation is almost non-existent, so you have to get by with googling for something like [miyagawa web::scraper] to find example scripts posted by the module’s author.

Aristotle Pagaltzis
+3  A: 

I find the "Expect" package very useful for automating tasks that span multiple servers. It enables you to telnet in, run commands, capture the results, and then take action based on those results.

Great for setting up monitoring scripts.
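A sketch of the send/expect loop; for illustration it spawns a local Perl one-liner that echoes its input, but a monitoring script would use the same pattern with something like `Expect->spawn('telnet', $host)`:

```perl
use strict;
use warnings;
use Expect;

# A stand-in for a remote session: a child that echoes
# each input line back with a prefix.
my $exp = Expect->spawn($^X, '-pe', '$_ = "got: $_"')
    or die "Cannot spawn: $!";
$exp->log_stdout(0);    # keep the session output quiet

$exp->send("hello\n");

# Wait up to 5 seconds for the expected (literal) pattern.
$exp->expect(5, 'got: hello') or die 'pattern not seen';
print 'matched: ', $exp->match, "\n";

$exp->soft_close;
```

In a real script the patterns would be the remote system's login and shell prompts, and `$exp->before` gives you everything the command printed up to the prompt.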

Ben
Ooh, yes. I loved Expect.pm! I used to do a lot of heavy tcl/expect programming, and just when Expect.pm was getting to the point where it could take over, my job changed. :( But it used to be on my short "must-install" module list.
skiphoppy
+4  A: 

The Phalanx 100 is a list of the 100 most useful Perl modules, based on a number of criteria, such as general use, use as building blocks for other modules, responsiveness of the author(s) and so on.

It's a good idea to study this list and read the documentation of each module. While you may not encounter the need to use them immediately, it's good to have them at the back of your mind. You'll save a lot of time by reaching for one of these modules when the time comes, rather than reinventing the wheel.

dland
+1  A: 

As Aristotle commented, Web::Scraper is just incredible. To get started just check these slides.

rgmarcha