As is posted every day on SO: You can't process HTML with regular expressions. http://stackoverflow.com/questions/701166
That goes double for a tool as limited as sed, with its Basic Regular Expressions.
If your input is limited enough that every link appears in exactly the same format, it might be possible, in which case you'd have to post an example of that format. But for general HTML pages, it can't be done.
ETA given your example: at the simplest level, since each URL is already on its own line, you could select the ones that look right and throw away the bits you don't want:
    #!/bin/sed -f
    # Keep only the href value from lines of the exact form
    # <td><a href="...">...</a></td>, printing it immediately
    s/^<td><a href="\(.*\)">.*<\/a><\/td>$/\1/p
    # Delete every line (suppresses the default print; matching
    # lines were already printed above)
    d
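To run it, save the script as extract-links.sed (any name will do) and invoke sed -f extract-links.sed page.html.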
However note that this would still leave URLs in their HTML-encoded form. If the script that produced this file is correctly HTML-encoding its URLs, you would then have to replace any instances of the &lt;/&gt;/&quot;/&amp; entity references back to their plain character forms ‘<>"&’. In practice the only one of those you're likely to meet is &amp;, which is very common indeed in URLs.
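If you did want to handle those four anyway, you could restructure the script to decode them before printing. A sketch in standard sed (note that &amp; has to be decoded last, or input like &amp;lt; would wrongly come out as ‘<’; also, a bare & in a sed replacement means ‘the whole match’, so it needs escaping):

    #!/bin/sed -f
    # Capture the URL; 't' branches only if this substitution matched
    s/^<td><a href="\(.*\)">.*<\/a><\/td>$/\1/
    t decode
    # Not a link line: drop it and start the next cycle
    d
    :decode
    # Decode the four basic entities, &amp; last
    s/&lt;/</g
    s/&gt;/>/g
    s/&quot;/"/g
    s/&amp;/\&/g
    # Print the decoded URL, then delete to suppress the default print
    p
    d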
But! That's not all the HTML-encoding that might have occurred. Maybe there are other HTML entity references in there, like &eacute; (which would be valid now that we have IRIs), or numeric character references, in both decimal and hex. With over a million Unicode code points, each writable as a decimal or a hex reference, that's two-million-odd potential encoded forms... replacing each one individually in sed would be a massive exercise in tedium.
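This is where a general-purpose language starts to look attractive. Python's standard library, for one, already knows the entire entity table; a minimal sketch that decodes named, decimal and hex references alike, reading from stdin:

    #!/usr/bin/env python3
    # Decode all HTML character references: named (&eacute;),
    # decimal (&#233;) and hex (&#xE9;) alike.
    import sys
    from html import unescape

    for line in sys.stdin:
        sys.stdout.write(unescape(line))

You could pipe the sed script's output through that rather than trying to enumerate entities in sed itself.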
Whilst you could possibly get away with it if you know that the generator script will never output any of those, an HTML parser is still best really. (Or, if you know it's well-formed XHTML, you can use a simpler XML parser, which tends to be built into modern languages' standard libraries.)
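For completeness, here's what the parser route might look like, sticking with Python's standard library (a sketch, assuming you simply want every href on the page; the parser decodes character references in attribute values for you):

    #!/usr/bin/env python3
    # Extract the href of every <a> element using the built-in HTML parser.
    # Character references in attribute values are decoded automatically.
    import sys
    from html.parser import HTMLParser

    class LinkExtractor(HTMLParser):
        def handle_starttag(self, tag, attrs):
            if tag == 'a':
                for name, value in attrs:
                    if name == 'href' and value is not None:
                        print(value)

    parser = LinkExtractor()
    parser.feed(sys.stdin.read())
    parser.close()

For well-formed XHTML, xml.etree.ElementTree would do the same job, modulo the XHTML namespace on the element names.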