I am working on a page with several dynamically generated tables where the original developers used approximately the following logic:

<table>
  <tr>...</tr>
  <asp:PlaceHolder ID="ph1" runat="server" />
</table>

In the code behind:

// Do lots of stuff (fetch/compute the data)
HtmlTableRow row = new HtmlTableRow();
HtmlTableCell cell = new HtmlTableCell();
// set cell text, formatting, etc.
row.Cells.Add(cell);
// repeat for every cell in the row, then add the completed row:
ph1.Controls.Add(row);

In one page, methods like this are used to add 10-15 rows of 10-15 columns each, spread over 4-5 tables. The table structures are fixed, so the same output could instead be produced by hard-coding the tables in HTML and putting Literals or Labels in each cell. The page itself has some performance issues, and this dynamic generation is one of the factors that could be contributing to them.

The question is:

  • Is dynamically creating this many table cells and building the tables in the code-behind, versus hard-coding the tables and using Literals/Labels to populate the data, something that could introduce big performance issues (i.e., is it worth switching over)?
  • Or is it more accurate to say that at this scale the difference is not significant, or at least not significant enough to warrant the hours it would take to rewrite the dynamic table generation as hard-coded tables plus Literals?
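For comparison, the hard-coded alternative being considered would look roughly like this (the Literal IDs and the GetCellValue helper below are made up for illustration):

```aspx
<table>
  <tr>
    <td><asp:Literal ID="litR1C1" runat="server" /></td>
    <td><asp:Literal ID="litR1C2" runat="server" /></td>
    <!-- ... one Literal per cell ... -->
  </tr>
</table>
```

And in the code behind:

```csharp
// GetCellValue is a hypothetical stand-in for however the data is fetched.
litR1C1.Text = GetCellValue(0, 0);
litR1C2.Text = GetCellValue(0, 1);
// ... and so on for every cell
```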
+4  A: 

It's not ideal, but I suspect that the real issue is the datasource that feeds that logic. Speed up whatever query it uses to get the data.

Joel Coehoorn
A: 

This code would be significantly enhanced in performance by using literals (in either direction), and with the volumes you mention I doubt you could even measure the difference.

AnthonyWJones
So if it is significantly enhanced in performance, then why would I not be able to measure the difference? I normally understand "significantly enhanced" as something that I will notice...
Yaakov Ellis
+4  A: 
  1. If there are any, they must be so marginal that you wouldn't notice. I've coded a GridView-ish control this way with 10x50 cells and had no noticeable performance issues compared to static tables of a similar size.

  2. What you should strive for in most cases is maintainability :)

CodeSpeaker
A: 

Yaakov, I'd worry more about what happens on the client side with the rendered code. I mean... it depends on HOW you set up cell rendering (formatting, CSS, classes/ids) for each individual cell.

If you end up with long class names/ids and inline CSS, then you should look into generating the table markup manually (without even using literals). It looks to me like all of those tables' contents are just text, so you could render them directly without any control overhead.
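One way to drop the per-cell control overhead entirely, assuming the cell contents really are plain text, is to build the markup as a single string and hand it to one Literal. A sketch (the `rows` collection and `litTable` control are hypothetical names):

```csharp
// Assumes: using System.Text; using System.Web;
// Build the whole table body as one string instead of creating
// a server control per cell.
var sb = new StringBuilder();
foreach (var row in rows)
{
    sb.Append("<tr>");
    foreach (var value in row)
    {
        // Encode the value so user data can't break the markup.
        sb.Append("<td>")
          .Append(HttpUtility.HtmlEncode(value))
          .Append("</td>");
    }
    sb.Append("</tr>");
}
litTable.Text = sb.ToString();
```

The trade-off is that you lose the control model (events, ViewState, declarative styling) for those cells, which is fine for display-only tables.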

+2  A: 

Short answer: highly unlikely.

Long answer: It really depends on your definition of "significantly". Assuming you are generating a typical HTML page on a decent server machine with a fast connection, the breakdown would go something like this:

  1. load data from data source: 50-300ms
  2. generate 1125 table cells (15 x 15 x 5): 50-100ms
  3. assemble HTML document: 10-20ms
  4. send HTML page over internet: 150-1000ms
  5. render HTML page in browser: 50-500ms

The numbers are obviously estimates -- the important point is the cost of step (2) in relation to all the others. Spending time optimizing step (2) when it constitutes such a small share of the total cost is not a good use of developer time. As CodeSpeaker said, clean and maintainable code is a far better investment. If total page load time really is an issue, attack the dominant costs first (steps 1 and 4) before moving on to the rest of the pipeline.
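Before optimizing anything, it's worth confirming the breakdown on your actual page. A minimal sketch, where `LoadData` and `BuildTables` are hypothetical stand-ins for your own steps (1) and (2):

```csharp
// Assumes: using System.Diagnostics;
var sw = Stopwatch.StartNew();
var data = LoadData();               // step 1: hit the data source
long loadMs = sw.ElapsedMilliseconds;

sw.Restart();
BuildTables(data);                   // step 2: create the table controls
long buildMs = sw.ElapsedMilliseconds;

// Write both numbers to the trace; only bother rewriting step 2
// if buildMs actually dominates loadMs.
Trace.Write(string.Format("load={0}ms build={1}ms", loadMs, buildMs));
```

If `loadMs` dwarfs `buildMs`, as it usually does, the query is where the hours should go.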

johnvey