This is a bit of a stretch, but I have an interesting (to me) programming (err... scripting? algorithmic? organizational?) problem. (I am tagging this Ruby because Ruby is my scripting language of choice.)
Imagine you have 100 gigabytes of pictures floating around on multiple drives, of which maybe 25 gigabytes are unique. The rest are duplicates (with the same filename), duplicates (with a different name), or smaller versions of the picture (exported for email). And aside from being spread across multiple drives, they live in different folder structures. For instance, img_0123.jpg might exist (in the Windows world) as c:\users\username\pics\2008\img_0123.jpg, c:\pics\2008\img_0123.jpg, c:\pics\export\img_0123-email.jpg, and d:\pics\europe_2008\venice\bungy_jumping_off_st_marks.jpg.
Back in the day we had to put everything in folders and give files pretty little names (like above). Today, search and tagging take care of all that, so the elaborate folder structure is redundant (and actually makes things harder to organize).
In the past, I tried moving everything to one drive, wrote a Ruby script to scan for duplicates (I don't trust those dupe-finder programs - I ran one once and it started deleting everything!), and then tried reorganizing them. However, after a few days I gave up (on the organizing and manual deleting, anyway).
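For reference, the scan itself is the easy part: hash file contents and group by digest, so byte-identical files match regardless of name or folder. A minimal sketch of what I mean (the root path is just a placeholder, and this only reports - deletion stays manual):

```ruby
require 'digest'
require 'find'

# Walk a directory tree and group files by a hash of their contents.
# Byte-identical files get the same digest, whatever their names are.
def duplicate_groups(root)
  by_hash = Hash.new { |h, k| h[k] = [] }
  Find.find(root) do |path|
    next unless File.file?(path)
    by_hash[Digest::SHA256.file(path).hexdigest] << path
  end
  by_hash.select { |_digest, paths| paths.size > 1 }
end

# Report duplicate groups; nothing is deleted.
duplicate_groups('d:/pics').each do |digest, paths|
  puts "#{digest[0, 12]}:"
  paths.each { |p| puts "  #{p}" }
end
```

Note this only catches exact duplicates - the resized email exports have different bytes, so no hash will ever match them to the originals.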
I am about to embark on a new approach. First, copy all the pictures from all of my drives onto a new drive, into ONE folder. Anything with a duplicate filename gets checked manually. Then fire up Picasa and scan the files by hand, deleting duplicates myself (using the good ol' noggin).
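That consolidation step is scriptable too. A rough sketch of what I have in mind (the function name and paths here are just examples): copy everything into one flat folder, skip a name collision when the bytes are identical, and rename the file for later review when they differ:

```ruby
require 'digest'
require 'fileutils'
require 'find'

# Copy every file under the source roots into one flat destination folder.
# Same name + same content: skip it (true duplicate).
# Same name + different content: keep both, renamed so I can review later.
def consolidate(roots, dest)
  FileUtils.mkdir_p(dest)
  roots.each do |root|
    Find.find(root) do |src|
      next unless File.file?(src)
      target = File.join(dest, File.basename(src))
      if File.exist?(target)
        next if Digest::SHA256.file(src) == Digest::SHA256.file(target)
        # Name collision with different contents: tag the newcomer for review.
        ext  = File.extname(src)
        base = File.basename(src, ext)
        n = 0
        n += 1 while File.exist?(target = File.join(dest, "#{base}-dup#{n}#{ext}"))
      end
      FileUtils.cp(src, target)
    end
  end
end

consolidate(['c:/users/username/pics', 'd:/pics'], 'e:/all_pics')
```

Again, the hash comparison only flags byte-identical files, so the emailed/resized versions still slip through - that's the part Picasa and the noggin have to handle.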
However, I am very dissatisfied that I couldn't easily solve this programmatically, and I am interested in hearing other solutions, programmatic or otherwise (maybe writing code isn't the best solution here - gasp!).