It depends on your Reduce step. In a Hadoop-style implementation of MapReduce, your Reducer gets called once per key, with all the rows relevant to that key.
So, for example, your Mapper might be taking in a lot of unordered web server logs, adding some metadata (e.g., geocoding), and emitting [key, record] pairs with a cookie ID as the key. Your Reducer would then be called once per cookie ID and would be fed all the data for that cookie, and could compute aggregate info such as visit frequency or average pages viewed per visit. Or you could key on geocode data, and gather aggregate stats based on geography.
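To make that concrete, here is a rough sketch of what that Mapper/Reducer pair could look like with Hadoop's Java API. The log layout, the field positions, and the `geocode()` helper are all made up for illustration; the point is just that the Reducer sees every record for a given cookie ID in a single call.

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;

    public class VisitStats {

        // Mapper: one raw log line in, one [cookieId, enrichedRecord] pair out.
        public static class LogMapper
                extends Mapper<LongWritable, Text, Text, Text> {
            @Override
            protected void map(LongWritable offset, Text line, Context context)
                    throws IOException, InterruptedException {
                String[] fields = line.toString().split("\t");  // assumed tab-separated log
                String cookieId = fields[0];                     // assumed field layout
                String enriched = line.toString() + "\t" + geocode(fields[1]); // add metadata
                context.write(new Text(cookieId), new Text(enriched));
            }

            private String geocode(String ip) {
                return "US";  // placeholder; a real job would do an actual geo lookup
            }
        }

        // Reducer: called once per cookie ID with every record for that cookie,
        // so it can compute per-visitor aggregates such as pages viewed.
        public static class VisitReducer
                extends Reducer<Text, Text, Text, IntWritable> {
            @Override
            protected void reduce(Text cookieId, Iterable<Text> records, Context context)
                    throws IOException, InterruptedException {
                int pageViews = 0;
                for (Text record : records) {
                    pageViews++;  // count rows; real logic might bucket them into visits
                }
                context.write(cookieId, new IntWritable(pageViews));
            }
        }
    }

Keying on geocode data instead is the same shape: swap the emitted key in the Mapper and the Reducer now gets all records for a region rather than a cookie.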
Even if you're not doing per-key aggregate analysis - indeed, even if you're computing something over the whole set - it may still be possible to break the computation into chunks, each of which can be fed to a Reducer, with the partial results combined in a final pass.
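For instance, a whole-set total (say, overall page views) could be computed by keying each record on an arbitrary bucket (hash mod N), letting N Reducers each sum their own chunk, and then adding the handful of partial sums afterwards. A hedged sketch - the bucket count and class names are illustrative, not anything prescribed by Hadoop:

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;

    public class GlobalCount {

        // Mapper: key each record by an arbitrary bucket so the work spreads
        // across several Reducers instead of funneling into one.
        public static class ChunkMapper
                extends Mapper<LongWritable, Text, IntWritable, LongWritable> {
            private static final int NUM_CHUNKS = 16;  // assumed chunk count
            @Override
            protected void map(LongWritable offset, Text line, Context context)
                    throws IOException, InterruptedException {
                int bucket = Math.floorMod(line.toString().hashCode(), NUM_CHUNKS);
                context.write(new IntWritable(bucket), new LongWritable(1));
            }
        }

        // Reducer: each call sums one chunk; a tiny follow-up pass (or a
        // single-reducer job) adds the NUM_CHUNKS partial totals together.
        public static class PartialSumReducer
                extends Reducer<IntWritable, LongWritable, IntWritable, LongWritable> {
            @Override
            protected void reduce(IntWritable bucket, Iterable<LongWritable> counts,
                                  Context context)
                    throws IOException, InterruptedException {
                long sum = 0;
                for (LongWritable c : counts) {
                    sum += c.get();
                }
                context.write(bucket, new LongWritable(sum));  // one partial total per chunk
            }
        }
    }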