views: 210
answers: 1

Hi all. I'm working on an iPhone OS app whose primary view is a 2-D OpenGL view (a subclass of Apple's EAGLView class that basically sets up an ortho-projected 2D environment) that the user interacts with directly.

Sometimes (not at all times) I'd like to render some controls on top of this baseline GL view, like a heads-up display. Note that the baseline view underneath may be scrolling/animating while the controls should appear fixed on the screen above it.

I'm good with Cocoa views in general, and I'm pretty good with CoreGraphics, but I'm green with OpenGL, and EAGLView's operations (and its relationship to CALayers) are fairly opaque to me. I'm not sure how to mix in other elements most effectively (read: best performance, least hassle, etc.). I know that in a pinch I can create and keep around geometry for all the other controls, render it on top of my baseline geometry every time I paint/swap, and thus keep everything the user sees in one single view. But I'm less certain about other techniques, such as having another view on top (UIKit/CG or GL?) or somehow creating other layers in my single view, etc.

If anyone who has travelled these roads before would be so kind as to write up some brief observations, or at least point me to documentation or existing discussion of this issue, I'd greatly appreciate it.

Thanks.

+1  A: 

Create your animated view as normal. Render it to a render target. What does this mean? Well, usually, when you 'draw' polygons to the screen, you're actually drawing them to a normal surface (the primary surface), which just so happens to be the one that eventually goes to the screen. Instead of rendering to the screen surface, you can render to any old surface.
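On iPhone OS this kind of off-screen surface is set up with the OES framebuffer-object extension of OpenGL ES 1.1 (the same machinery EAGLView uses for its screen framebuffer). A minimal sketch, assuming a current EAGL context; the names `hudFramebuffer`/`hudTexture` and the 512x512 size are my own:

```c
// Create a texture-backed off-screen render target (OES FBO extension).
GLuint hudFramebuffer, hudTexture;

glGenTextures(1, &hudTexture);
glBindTexture(GL_TEXTURE_2D, hudTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// Power-of-two dimensions are the safe choice on ES 1.1 hardware.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 512, 512, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);

glGenFramebuffersOES(1, &hudFramebuffer);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, hudFramebuffer);
glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES,
                          GL_TEXTURE_2D, hudTexture, 0);
```

While `hudFramebuffer` is bound, everything you draw lands in `hudTexture`; rebind your main framebuffer to go back to drawing on screen.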

Now, your HUD. Will this be exactly the same all the time or will it change? Will only bits of it change?

If all of it changes, you'll need to keep all the HUD geometry and textures in memory, and will have to render them onto your 'scrolling' surface as normal. You can then apply this final, composite render to the screen. I wouldn't worry too much about hassle and performance here -- the HUD can hardly be as complex as the background. You'll have a few textured quads at most?
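The per-frame overlay pass here is just a matter of switching to a screen-space projection after the scene is drawn. A fixed-function ES 1.1 sketch, where `screenWidth`/`screenHeight` are assumed to be your view's dimensions:

```c
// After rendering the scrolling scene this frame, draw the HUD in
// screen coordinates, unaffected by the scene's scrolling transform.
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
glOrthof(0, screenWidth, screenHeight, 0, -1, 1);  // screen-space ortho
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// ... bind HUD textures and draw the control quads here ...
glDisable(GL_BLEND);

glPopMatrix();                 // restore modelview
glMatrixMode(GL_PROJECTION);
glPopMatrix();                 // restore the scene's projection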

If all of the HUD is static, then you can render it to a separate surface when your app starts, then each frame render from that surface onto the animated surface you're drawing. This way you can unload all the HUD geometry and textures right at the start. Of course, it might be the case that the surface takes up more memory -- it depends on what resources your app needs most.

If your HUD is half dynamic and half static, then technically you can pre-render the static parts and then render the other parts as you go along, but this is more hassle than the other two options.

Your two main options depend on how dynamic the HUD is. If it moves, you will need to redraw it onto your scene every frame. It sucks, but I can hardly imagine its geometry is complex compared to the rest of the scene. If it's static, you can pre-render it and just alpha-blend one surface onto another before sending to the screen.

As I said, it all depends on what resources your app will have spare.

Pod
Thanks for your answer, Pod. I'm guessing what you're saying is "don't draw controls or anything with CoreGraphics on top of your GL view". Here is the followup: When I render onto another (not the screen) surface, how do I then blit that on top of the main surface?
quixoto
I haven't used OGL in a long time, but an easy way (if not 'the' way) is just to draw a quad that's the size of the target surface, and use your pre-rendered stuff as a texture. This will also scale it for you, if they're different sizes. If they're 1:1 then it's basically an old-fashioned blit. Use alpha blending etc. to get the desired results when drawing the HUD (so you don't make a white screen with only the HUD on it, etc. :))
Pod
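A minimal sketch of that full-screen quad "blit" in fixed-function OpenGL ES 1.1 (the API EAGLView targets on iPhone OS). `hudTexture` is a hypothetical texture holding the pre-rendered HUD with an alpha channel, and a current GL context is assumed:

```c
// Quad corners in clip space (-1..1), so with identity matrices the
// quad covers the whole render target -- an old-fashioned blit.
static const GLfloat verts[] = {
    -1.0f, -1.0f,    1.0f, -1.0f,
    -1.0f,  1.0f,    1.0f,  1.0f,
};
static const GLfloat texcoords[] = {
    0.0f, 0.0f,    1.0f, 0.0f,
    0.0f, 1.0f,    1.0f, 1.0f,
};

glMatrixMode(GL_PROJECTION); glLoadIdentity();  // no transform at all
glMatrixMode(GL_MODELVIEW);  glLoadIdentity();

glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, hudTexture);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);  // keep the scene visible

glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, verts);
glTexCoordPointer(2, GL_FLOAT, 0, texcoords);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
glDisable(GL_BLEND);
```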
Also, when doing a "blit" of one pre-rendered quad onto your screen, it might be easiest to use a 'passthrough' VS. I think OGL has a function to do this for you, but if not, just make a GLSL shader that simply passes xyzw and uvst through without transforming them, and use coordinates of (1,0,0,0),(0,0,0,0),(1,1,0,0) etc. for the corners of your quad. You might need to play with the z in order to make it appear "in front" of the pixels already on that render target.
Pod
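Worth noting: OpenGL ES 1.1 on iPhone OS is fixed-function, so there the "passthrough" is simply loading identity projection/modelview matrices. If you're on an ES 2.0 device, a passthrough vertex shader of the kind described might look like this (attribute names are my own):

```glsl
// Passthrough vertex shader: position and texcoord go straight
// through with no projection or modelview transform applied.
attribute vec4 a_position;   // already in clip-space coordinates
attribute vec2 a_texcoord;
varying vec2 v_texcoord;

void main() {
    v_texcoord = a_texcoord;
    gl_Position = a_position;  // tweak z here to control depth ordering
}
```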