This question has less to do with actual code and more to do with the underlying approach.
My 'boss' at my pseudo-internship has requested that I write him a script that will scrape a list of links from a user's tweet (the list comes 'round once per week, and it's always the same user) and then publish said list to the company's Tumblr account.
Currently, I'm thinking about this structure: the base will be a bash script that first calls a script that uses the Twitter API to find the post by its hashtag and parse out the list (candidate languages are Perl, PHP, and Ruby, in no particular order). That script will store the parsed list (with some markup) in a text file, from which a second script will format the list and post it via the Tumblr API. A rough sketch of the first half is below.
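For concreteness, here's roughly what I have in mind, in Ruby (one of my candidates). The endpoint URL, the unauthenticated GET, the JSON field names, and the screen name and hashtag are all guesses/placeholders on my part, and I've hand-waved the authentication entirely:

```ruby
#!/usr/bin/env ruby
# Sketch of the first half: fetch the tweet via the Twitter API and parse
# out the links. The endpoint URL and JSON field names are assumptions;
# the real thing would need whatever authentication Twitter requires.
require 'net/http'
require 'json'
require 'uri'

SCREEN_NAME = 'example_user'   # placeholder: the user who posts the list
HASHTAG     = '#weeklylinks'   # placeholder: the hashtag marking the post

uri = URI('https://api.twitter.com/1.1/statuses/user_timeline.json' \
          "?screen_name=#{SCREEN_NAME}&count=50")
tweets = JSON.parse(Net::HTTP.get(uri))

# Find the first tweet carrying the hashtag and pull every URL out of it.
post = tweets.find { |t| t['text'].include?(HASHTAG) }
abort 'No matching tweet found this week.' if post.nil?

links = post['text'].scan(%r{https?://\S+})

# Hand off to the second script via a text file, with minimal markup.
File.open('links.txt', 'w') do |f|
  links.each { |l| f.puts "* #{l}" }
end
```

The bash "base" would then just run the fetch script followed by the posting one, and the second half would be the mirror image of this: read links.txt, add markup, and POST it to Tumblr's create-post endpoint (which needs OAuth, from what I've read, so that's where the real work is).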
Is this a sensible way to go about it? So far in planning I've only gotten as far as fetching the tweet, but I'm already torn between using the API to grab the post and just grabbing the feed they provide and parsing that. I know it's not really a big project, but it's certainly the largest one I've ever started, so I'm paralyzed with fear when it comes to making decisions!
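For comparison, the feed route I'm considering would look something like this, using Ruby's bundled rss library (the feed URL is a placeholder, since I'd have to check what Twitter actually exposes, and I'm assuming an RSS 2.0 feed where each item's title is a plain string):

```ruby
#!/usr/bin/env ruby
# The feed alternative: pull the user's feed and parse it with the rss
# library. The feed URL is a placeholder, and this assumes RSS 2.0
# (where item.title is a plain string, unlike Atom).
require 'rss'
require 'open-uri'

FEED_URL = 'https://twitter.com/statuses/user_timeline/example_user.rss'
HASHTAG  = '#weeklylinks'   # placeholder hashtag, as in the API sketch

feed = URI.open(FEED_URL) { |f| RSS::Parser.parse(f) }
item = feed.items.find { |i| i.title.include?(HASHTAG) }
abort 'No matching item in the feed.' if item.nil?

puts item.title.scan(%r{https?://\S+})
```

The feed version needs no authentication, but it feels more fragile: I'd be depending on however Twitter happens to render the feed, whereas the API gives me structured fields.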