Simply put, the longer two strings are, and the more similar they are, the longer they take to compare. Consider two strings 1000 characters long that differ only in the last character: the comparison routine has to walk through 999 matching characters before it discovers the discrepancy.
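A minimal sketch of that early-exit behavior (a hypothetical `compare_cost` counter, not any particular database's comparator):

```python
# Sketch of why near-identical long strings are expensive to compare:
# a typical comparison walks both strings until the first mismatch.
def compare_cost(a: str, b: str) -> int:
    """Return how many character comparisons it takes to tell a from b."""
    steps = 0
    for ca, cb in zip(a, b):
        steps += 1
        if ca != cb:
            return steps  # early exit on the first difference
    return steps  # equal up to the shorter length

# Two 1000-char strings differing only at the end: 1000 comparisons.
print(compare_cost("x" * 999 + "a", "x" * 999 + "b"))  # 1000
# Strings differing at the first character: 1 comparison.
print(compare_cost("apple", "zebra"))                  # 1
```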
But let's contrast the cost of comparing long strings with the cost of locating them on disk.
Indexes are stored in B+Trees, which are balanced trees with a variable number of keys per node, and whose leaf nodes are linked in key order (a -> b -> c). This gives us two capabilities: quick lookup by walking down the tree, and then quick, in-order access to neighboring nodes (once you find 'a', it's easy to find 'b', then 'c', etc.).
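A toy sketch of those two capabilities, with single-key "leaves" for brevity (real B+Tree pages hold many keys, and the lookup walks interior nodes first):

```python
# Toy model of linked B+Tree leaves: find a key, then follow .next
# pointers to get the remaining keys in order without re-searching.
class Leaf:
    def __init__(self, key):
        self.key = key
        self.next = None  # link to the next leaf in key order

def build_chain(keys):
    """Build the sorted, linked leaf level from an unordered key list."""
    leaves = [Leaf(k) for k in sorted(keys)]
    for prev, cur in zip(leaves, leaves[1:]):
        prev.next = cur
    return leaves

def scan_from(leaf, count):
    """Once a leaf is found, following .next yields keys in tree order."""
    out = []
    while leaf is not None and len(out) < count:
        out.append(leaf.key)
        leaf = leaf.next
    return out

chain = build_chain(["c", "a", "b"])
print(scan_from(chain[0], 3))  # ['a', 'b', 'c']
```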
Indexes are laid out in disk pages, and in general the more keys you can cram into an index page, the lower the overall height of the index B+Tree. The lower the tree height, the faster you can find a specific row, since you typically traverse the full height of the tree (it's balanced) to get to any one leaf node.
The lower the height, the fewer disk hits you have to make. If you have a tree that's 4 levels high, then reaching any random node requires loading 4 index pages into RAM, and that's 4 disk hits. So a 4-high tree is "twice as efficient" (for assorted values of "twice") as an 8-high tree.
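A back-of-envelope model of how fan-out drives height (assuming one page read per tree level and a cold cache):

```python
# Rough model: a balanced tree with fan-out f and height h addresses up
# to f**h leaf entries, and reaching any one leaf costs h page reads.
def height_needed(n_rows: int, fan_out: int) -> int:
    """Smallest height h such that fan_out**h >= n_rows."""
    height, capacity = 1, fan_out
    while capacity < n_rows:
        height += 1
        capacity *= fan_out
    return height

n = 1_000_000
# Smaller keys -> more keys per page -> wider fan-out -> fewer disk hits.
print(height_needed(n, 1000))  # 2 levels, so 2 disk hits per lookup
print(height_needed(n, 32))    # 4 levels, so 4 disk hits per lookup
```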
Also, the more keys you can put in an index page, the fewer page hits you'll need when you start iterating along the nodes. If your nodes hold 10 key values, loading a hundred rows costs you 10 index page hits, whereas if they hold only 5 per node, you get twice the index disk hits.
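The same arithmetic for an iteration along the leaf chain, assuming the scan starts at a page boundary:

```python
# Page hits for scanning rows in index order: rows read divided by
# keys per page, rounded up.
def scan_page_hits(rows: int, keys_per_page: int) -> int:
    return -(-rows // keys_per_page)  # ceiling division

print(scan_page_hits(100, 10))  # 10 page hits
print(scan_page_hits(100, 5))   # 20 page hits -- twice the disk hits
```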
Note that the number of records you can address grows geometrically as you add layers to the tree (i.e. the difference between a 5-key node and a 10-key node is not merely twice the records).
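Rough numbers, assuming capacity is simply fan-out raised to the height:

```python
# Capacity grows geometrically with height: a tree of height h and
# fan-out f holds roughly f**h entries, so doubling the keys per node
# multiplies capacity by 2**h, not by 2.
def capacity(fan_out: int, height: int) -> int:
    return fan_out ** height

# At height 4, the 10-key node holds 16x what the 5-key node does:
print(capacity(5, 4))   # 625
print(capacity(10, 4))  # 10000
```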
So, that's the value of having small keys -- lots of fan-out in your index trees.
Mind, with a hash, you'd still have to query "where hash = ... and url = '...'" (since different URLs can hash to the same value).
But it really comes down to your data access patterns, truthfully: how busy the DB is, what kinds of queries you make, how much RAM you have to cache index pages, etc.
The index hit to locate your initial row may well not even be on the radar of your query times.
The key takeaway is that the raw number of records isn't what matters; the fan-out of the index tree is. For example, if you have a 1K index page and a 4-byte key (a long int), you can get roughly 250 keys per page (being very simplistic here), and a 3-layer tree can address, what, 16M rows -- any of 16M rows within 3 disk hits.
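That back-of-envelope, spelled out (pointer and page-header overhead ignored, as in the text):

```python
# 1K page / 4-byte keys, ignoring child pointers and page headers:
keys_per_page = 1024 // 4
print(keys_per_page)  # 256 -- call it ~250

# Three levels of ~250-way fan-out:
print(250 ** 3)       # 15625000 -- roughly "16M rows in 3 disk hits"
```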