My iPhone app connects to my PHP web service to retrieve data from a MySQL database. A request can return up to 500 results, so what is the best way to implement paging and retrieve the data 20 items at a time?

  • Let's say I receive the first 20 ads from my database. How can I then request the next 20 ads?
+5  A: 

The LIMIT clause can be used to constrain the number of rows returned by the SELECT statement. LIMIT takes one or two numeric arguments, which must both be nonnegative integer constants (except when using prepared statements).

With two arguments, the first argument specifies the offset of the first row to return, and the second specifies the maximum number of rows to return. The offset of the initial row is 0 (not 1):

SELECT * FROM tbl LIMIT 5,10;  # Retrieve rows 6-15

To retrieve all rows from a certain offset up to the end of the result set, you can use some large number for the second parameter. This statement retrieves all rows from the 96th row to the last:

SELECT * FROM tbl LIMIT 95,18446744073709551615;

With one argument, the value specifies the number of rows to return from the beginning of the result set:

SELECT * FROM tbl LIMIT 5;     # Retrieve first 5 rows

In other words, LIMIT row_count is equivalent to LIMIT 0, row_count.
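Applied to the question's 20-per-page case, each page is just a different offset. A minimal sketch, assuming a hypothetical ads table with an integer id primary key (the ORDER BY keeps page boundaries stable between requests, as the comments below point out):

SELECT * FROM ads ORDER BY id LIMIT 0, 20;   # page 1: rows 1-20
SELECT * FROM ads ORDER BY id LIMIT 20, 20;  # page 2: rows 21-40, i.e. offset = (page - 1) * 20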

Faisal Feroz
When using LIMIT for paging you should also specify an ORDER BY.
Mark Byers
-1, because it is a word-for-word copypaste from http://dev.mysql.com/doc/refman/5.1/en/select.html
shylent
@shylent: Nothing wrong with quoting the documentation, but I agree that he should have mentioned that he was copying the docs and provided a link to the original source. Also I'm surprised that the documentation would include examples of using LIMIT without an ORDER BY... that seems like a bad practice to be encouraging. Without an ORDER BY there's no guarantee that the order will be the same between calls.
Mark Byers
anyway, when paginating large resultsets (and that's what pagination is for - breaking up large resultsets into smaller chunks, right?), you should keep in mind that if you do a `limit X, Y`, what essentially happens is that X+Y rows are retrieved, the X rows from the beginning are dropped, and whatever is left is returned. To reiterate: `limit X, Y` results in a scan of X+Y rows.
shylent
I don't like your LIMIT 95, 18446744073709551615 idea... take a look at `OFFSET` ;-)
CharlesLeaf
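For reference, the OFFSET keyword mentioned above is an alternative, more readable spelling of the two-argument form (same hypothetical ads table as in the sketch above):

SELECT * FROM ads ORDER BY id LIMIT 20 OFFSET 40;  # equivalent to LIMIT 40, 20: rows 41-60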
A: 

You can also do:

SELECT SQL_CALC_FOUND_ROWS * FROM tbl LIMIT 0, 20;

The total row count of the statement (as if no LIMIT were applied) is captured by the same SELECT, so you don't need a separate query to get the table size. You then read that count with SELECT FOUND_ROWS();
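A minimal sketch of the two-step usage, again with the hypothetical ads table (both statements must run on the same connection, since FOUND_ROWS() is per-connection; note that SQL_CALC_FOUND_ROWS is deprecated in recent MySQL versions, though it works as described here):

SELECT SQL_CALC_FOUND_ROWS * FROM ads ORDER BY id LIMIT 0, 20;  # first page of 20
SELECT FOUND_ROWS();  # total rows the query would have matched without the LIMIT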

surajz
+1  A: 

For 500 records efficiency is probably not an issue, but if you have millions of records then it can be advantageous to use a WHERE clause to select the next page:

SELECT *
FROM yourtable
WHERE id > 234374
ORDER BY id
LIMIT 20

The "234374" here is the id of the last record from the prevous page you viewed.

This enables an index on id to be used to find the first record. If you use LIMIT offset, 20 you may find that it gets slower and slower as you page towards the end, because MySQL has to scan past all the skipped rows to reach the offset. As I said, it probably won't matter if you have only 500 records, but it can make a difference with larger result sets.

Another advantage of this approach is that if the data changes between the calls you won't miss records or get a repeated record. This is because adding or removing a row means that the offset of all the rows after it changes. In your case it's probably not important - I guess your pool of adverts doesn't change too often and anyway no-one would notice if they get the same ad twice in a row - but if you're looking for the "best way" then this is another thing to keep in mind when choosing which approach to use.

If you do wish to use LIMIT with an offset (and this is necessary if a user navigates directly to page 10000 instead of paging through pages one by one), you could read this article about late row lookups to improve the performance of LIMIT with a large offset.
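A sketch of the late row lookup idea (a deferred join), again assuming an ads table with an indexed id: the inner query walks only the index to collect the page's 20 ids, and full rows are then fetched for just those ids:

SELECT a.*
FROM (SELECT id FROM ads ORDER BY id LIMIT 200000, 20) AS page
JOIN ads AS a ON a.id = page.id
ORDER BY a.id;  # only 20 full-row lookups instead of 200,020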

Mark Byers
This is more like it :P While I absolutely disapprove of the implication that 'newer' ids are always larger than 'older' ones, *most of the time* this will indeed be the case and so, I think, this is 'good enough'. Anyway, yes, as you demonstrated, proper pagination (without severe performance degradation on large resultsets) is not particularly trivial, and writing `limit 1000000, 10` and hoping that it will work won't get you anywhere.
shylent