Be careful of conflating too many different things. You have a logical cost of the query based on the number of rows to be examined, a (possibly smaller) logical cost based on the number of rows actually returned, and an unrelated physical cost based on the number of pages that have to be read.
The three are related, but not strongly.
The number of rows examined is the largest of these numbers and the least easy to control: every candidate row has to be matched through the join algorithm. It is also the least relevant, because matching rows is in-memory work, cheap next to the I/O costs below.
The number of rows returned is more costly, because that's I/O bandwidth between the client application and the database.
The number of pages read is the most costly of all: that's an even larger number of physical I/Os, and it's load inside the database itself, with impact on all clients.
A SQL query against one table is O(n) in the number of rows. It's also O(p) in the number of pages.
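For concreteness, here's a minimal Python sketch of that single-table case; the list-of-pages representation and the names (full_scan, table_pages, predicate) are illustrative assumptions, not how any real engine lays out storage:

```python
def full_scan(table_pages, predicate):
    """Single-table scan: O(p) page reads, O(n) rows examined."""
    for page in table_pages:      # O(p) physical page reads
        for row in page:          # O(n) rows examined in total
            if predicate(row):
                yield row         # rows *returned* may be far fewer
```

All three costs show up here: every page is read, every row is examined, but only the rows matching the predicate cross the wire back to the client.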
With more than one table, the number of rows examined is O(n*m*...). That's the nested-loops algorithm. Depending on the cardinality of the relationships, however, the result set may be as small as O(n): when the relationships are all 1:1, only n rows come back. But each table must still be examined for matching rows.
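A minimal sketch of those nested loops (Python, with hypothetical names; a real engine works page-at-a-time, not row-at-a-time):

```python
def nested_loop_join(outer, inner, match):
    """Naive nested loops: O(n*m) pairings examined."""
    for o in outer:               # n rows
        for i in inner:           # m rows examined per outer row
            if match(o, i):
                yield (o, i)      # a 1:1 relationship yields only O(n) pairs
```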
A Hash Join replaces O(n*log(n)) index-plus-table reads with O(n) direct hash lookups. You still have to process O(n) rows, but you bypass some index reads.
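A sketch of that trade, assuming an equi-join with well-distributed keys (the names here are made up for illustration):

```python
def hash_join(build_side, probe_side, build_key, probe_key):
    """O(n) build + O(m) probes = O(n + m), versus per-row index traversals."""
    buckets = {}
    for b in build_side:                          # O(n) build phase
        buckets.setdefault(build_key(b), []).append(b)
    for p in probe_side:                          # O(m) probe phase
        for b in buckets.get(probe_key(p), ()):   # O(1) expected lookup
            yield (b, p)
```

The hash table does the work the index would otherwise do: one O(1) expected lookup per probe row instead of an O(log n) descent through a tree-structured index.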
A Merge Join replaces O(n*m) nested loops with an O((n+m)*log(n+m)) sort-and-merge operation.
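In sketch form (Python; this version assumes unique join keys to keep the merge loop short, where a real merge join also handles runs of duplicates):

```python
def merge_join(left, right, key):
    """Sort both inputs, O((n+m)*log(n+m)), then merge them in O(n+m)."""
    left = sorted(left, key=key)
    right = sorted(right, key=key)
    i = j = 0
    while i < len(left) and j < len(right):
        kl, kr = key(left[i]), key(right[j])
        if kl < kr:
            i += 1
        elif kl > kr:
            j += 1
        else:                          # assumes unique keys on both sides
            yield (left[i], right[j])
            i += 1
            j += 1
```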
With indexes, the physical cost may be reduced to O(log(n)*m) if a table is merely checked for existence. If rows are required, the index speeds access to them, but all matching rows must still be processed: that's O(n*m) in the worst case, because that's the size of the result set, irrespective of indexes.
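A sketch of those index-assisted lookups, using a sorted list plus bisect as a stand-in for a tree-structured index (all names here are hypothetical):

```python
import bisect

def index_join(outer, sorted_inner, key):
    """Each of the m outer rows probes the 'index' in O(log n): O(log(n)*m)
    lookups. Fetching all matches is still bounded by the result-set size."""
    inner_keys = [key(r) for r in sorted_inner]   # sorted_inner must be sorted by key
    for o in outer:                               # m outer rows
        k = key(o)
        lo = bisect.bisect_left(inner_keys, k)    # O(log n) probe
        hi = bisect.bisect_right(inner_keys, k)   # an existence check needs only this
        for r in sorted_inner[lo:hi]:             # every matching row still processed
            yield (o, r)
```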
The number of pages examined for this work may be smaller, depending on the selectivity of the index.
The point of an index isn't so much to reduce the number of rows examined. It's to reduce the physical I/O cost of fetching those rows.