In general you need to automate site access and parsing, also known as scraping. There are usually two tricky areas to watch out for: 1) authentication, and 2) the markup itself, since whatever you're scraping will typically require you to inspect its HTML closely while you work out what you're after.
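For the authentication side, the core moves are: POST the login form, hold on to the session cookie, and replay it on later requests. Mechanize handles the cookie jar for you automatically, but a stdlib sketch shows what's going on underneath (the /login path and the field names here are made up, so adapt them to the site you're targeting):

```ruby
require 'net/http'
require 'uri'

# Sketch of the authentication step: POST credentials to a login form
# and keep the session cookie for later requests. The path ('/login')
# and field names ('user', 'pass') are hypothetical; inspect the real
# site's login form to find the actual ones.
def login(base_url, username, password)
  uri = URI.join(base_url, '/login')
  response = Net::HTTP.post_form(uri, 'user' => username, 'pass' => password)
  response['Set-Cookie'] # replay this header on subsequent requests
end
```

With Mechanize you'd instead fill in `page.form(...)` fields and call `agent.submit(form)`, and the agent carries the cookies forward for you.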
A while back I wrote a simple Ruby app that scrapes and searches Apple's refurbished store; you can check it out here as an example (keep in mind it could certainly use improvement, but it may get you going):
http://grapple.xorcyst.com
I've written similar scripts using mechanize and hpricot to grab data from my bank accounts (I'm not too keen on giving Mint my credentials), as well as from job sites, used car dealerships, etc., so the approach is flexible if you want to put in the effort.
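The parsing side follows the same shape everywhere: fetch the HTML, select the elements you care about, extract the fields. Hpricot (or its successor Nokogiri) gives you real CSS/XPath selectors for this; the crude stdlib sketch below, run against a made-up HTML snippet, is only meant to show that shape, not to replace a proper parser:

```ruby
# A made-up fragment standing in for a fetched bank-statement page.
html = <<-HTML
  <table id="transactions">
    <tr><td>2009-05-01</td><td>-42.15</td></tr>
    <tr><td>2009-05-03</td><td>120.00</td></tr>
  </table>
HTML

# Crude stdlib-only extraction; a real scraper would use Hpricot or
# Nokogiri selectors instead of a regex scan.
rows = html.scan(%r{<tr><td>(.*?)</td><td>(.*?)</td></tr>})
rows.each { |date, amount| puts "#{date}: #{amount}" }
```

With Hpricot the equivalent would be something like `doc.search("#transactions tr")` followed by reading each cell's `inner_text`.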
It's a useful thing to do, but you need to be careful not to violate any use policies and the like.
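On that front, a site's robots.txt tells you what it asks crawlers to leave alone, so it's worth checking before you scrape. A minimal check might look like this (a sketch only: it ignores User-agent sections and wildcard rules, which a real robots.txt parser would handle):

```ruby
# Minimal Disallow check against a robots.txt body. This is a sketch,
# not a compliant parser: it treats every Disallow line as global and
# does simple prefix matching.
def disallowed?(robots_body, path)
  rules = robots_body.scan(/^Disallow:\s*(\S+)/).flatten
  rules.any? { |rule| path.start_with?(rule) }
end

robots = "User-agent: *\nDisallow: /private\nDisallow: /tmp\n"
disallowed?(robots, '/private/data') # => true
disallowed?(robots, '/jobs')         # => false
```

In practice you'd fetch the body with `Net::HTTP.get(URI.join(site, '/robots.txt'))` and also read the site's terms of use.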
Here's another quick example that grabs job postings, to show you how simple it can be:
#!/usr/bin/env ruby
require 'rubygems'
require 'mechanize'
require 'hpricot'

url = "http://tbe.taleo.net/NA2/ats/careers/jobSearch.jsp?org=DIGITALGLOBE&cws=1"

# Pretend to be a regular browser; some sites block unknown user agents.
site = WWW::Mechanize.new { |agent| agent.user_agent_alias = 'Mac Safari' }
page = site.get(url)

# Fill in the search form (field and form names found by inspecting
# the page's HTML, which is the legwork mentioned above).
search_form = page.form("TBE_theForm")
search_form.org = "DIGITALGLOBE"
search_form.cws = "1"
search_form.act = "search"
search_form.WebPage = "JSRCH"
search_form.WebVersion = "0"
search_form.add_field!('location', '1')
search_form.add_field!('updatedWithin', '2')
search_results = site.submit(search_form)

# Parse the results and print every link that looks like a job posting.
doc = Hpricot(search_results.body)
puts "<b>DigitalGlobe (Longmont)</b>"
doc.search("//a").each do |a|
  puts a.to_s.gsub('"', '') if a.to_s.include?('rid=')
end