Set MergeOption.NoTracking on the query, since it is a read-only operation. If you are using the same ObjectContext for saving other data, Detach each object from the context after you have processed it.
How to detach:

    foreach (var entity in query)
    {
        // do something with the entity
        objectContext.Detach(entity);
    }
Edit: If you are using the NoTracking option, there is no need to detach.
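As a minimal sketch of the NoTracking setup (the `MyEntities` context and `Orders` entity set are hypothetical names — substitute your own):

    using System.Data.Objects;

    using (var objectContext = new MyEntities())
    {
        // ObjectSet<T> exposes MergeOption; NoTracking means results
        // are never attached to the context, so no Detach is needed.
        objectContext.Orders.MergeOption = MergeOption.NoTracking;

        foreach (var order in objectContext.Orders)
        {
            // read-only processing of each order
        }
    }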
Edit 2: I wrote to Matt Warren about this scenario, and I am posting the relevant private correspondence here, with his approval:
  The results from SQL server may not
  even be all produced by the server
  yet.  The query has started on the
  server and the first batch of results
  are transferred to the client, but no
  more are produced (or they are cached
  on the server) until the client
  requests to continue reading them. 
  This is what is called ‘firehose
  cursor’ mode, or sometimes referred to
  as streaming.  The server is sending
  them as fast as it can, and the client
  is reading them as fast as it can
  (your code), but there is a data
  transfer protocol underneath that
  requires acknowledgement from the
  client to continue sending more data.
Since IQueryable inherits from IEnumerable, I believe the underlying query sent to the server would be the same. However, when we call ToList() on the result, the data reader used by the underlying connection materializes every row immediately; all the objects are loaded into the app domain at once and none of them can be collected until the list itself is released, so you might run out of memory.
When you iterate with foreach over the IEnumerable instead, the data reader reads the SQL result set one row at a time; each object is created, processed, and then becomes eligible for garbage collection before the next one is read. The underlying connection might receive the data in chunks and might not send an acknowledgement back to SQL Server until all the chunks are read. Hence you will not run into an 'out of memory' exception.
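The two access patterns can be sketched like this (the `Orders` entity set and `Process` method are hypothetical placeholders for your own query and per-row work):

    // Buffered: ToList() drains the data reader up front, so the
    // entire result set is held in memory for the lifetime of the list.
    var allOrders = objectContext.Orders.ToList();
    foreach (var order in allOrders)
    {
        Process(order);
    }

    // Streamed: enumerating the query directly reads one row at a time
    // from the firehose cursor; each object can be collected after use.
    foreach (var order in objectContext.Orders)
    {
        Process(order);
    }

The SQL sent to the server is the same in both cases; the difference is only in how many materialized objects are alive on the client at any moment.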
Edit 3:
While your query is running, you can open SQL Server "Activity Monitor" and see the query with a Task State of SUSPENDED and a Wait Type of ASYNC_NETWORK_IO — which indicates that the results are sitting in the SQL Server network buffer, waiting for the client to consume them. You can read more about it here and here