I'm building a simple solid modeling application. Users need to be able to manipulate objects in both orthographic and perspective views. For example, when there's a box on the screen and the user clicks on it to select it, it needs to get 'handles' at the corners and in the center, so that the user can move the mouse over a handle and drag it to enlarge or move the box.
What strategies are there for doing this, and which one is best? I can think of two obvious ones:
1) Treat the handles as 3D objects. I.e. for a box, add small boxes to the scene at the corners of the 'main' box. Problems: in a perspective view the handles would shrink with distance, and even in an orthographic view I'd need to scale them to the current zoom level (the handles should keep the same on-screen size no matter how far the user is zoomed in or out). See the first sketch after this list.
2) Add the handles after the scene has been rendered. Render to an offscreen buffer, determine the 2D locations of the corners somehow, and use regular 2D drawing techniques to draw the handles. Problems: how will I do hit testing? I'd need a two-stage hit-testing approach as well. And how do I draw in 2D on top of a 3D-rendered image? Fall back to GDI? See the second sketch below.
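For approach 1, here's a minimal sketch of the scaling I have in mind, assuming a perspective camera with a known vertical field of view (the function names and parameters are my own placeholders, not from any particular library):

```cpp
#include <cmath>

// Sketch: world-space size for a handle so it covers roughly
// `desiredPx` pixels on screen, regardless of camera distance.
// Perspective camera: fovY is the vertical field of view in radians,
// distanceToEye is the distance from the eye to the handle's center.
float HandleWorldSize(float distanceToEye, float fovYRadians,
                      int viewportHeightPx, float desiredPx)
{
    // Height of the view frustum (in world units) at the handle's depth.
    float frustumHeight = 2.0f * distanceToEye * std::tan(fovYRadians * 0.5f);
    // Fraction of the viewport the handle should cover, times that height.
    return frustumHeight * (desiredPx / viewportHeightPx);
}

// Orthographic camera: the visible world-space height is fixed by the
// projection, so the handle size depends only on the zoom, not distance.
float HandleWorldSizeOrtho(float orthoHeightWorld,
                           int viewportHeightPx, float desiredPx)
{
    return orthoHeightWorld * (desiredPx / viewportHeightPx);
}
```

So the scaling seems solvable, but it would have to be recomputed every frame (or at least on every camera change).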
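For approach 2, I imagine something like this with GLU: project each corner into window coordinates, then draw the handles in a second, orthographic overlay pass, so no GDI fallback would be needed. gluProject and gluOrtho2D are real GLU calls; the rest of the names are my own:

```cpp
#include <GL/gl.h>
#include <GL/glu.h>

// Call while the 3D modelview/projection matrices are still bound.
// Returns true and fills (winX, winY) if the corner projects in front
// of the camera (the winZ test is a rough check, not bulletproof).
bool ProjectCorner(const double corner[3], double& winX, double& winY)
{
    GLdouble model[16], proj[16];
    GLint viewport[4];
    glGetDoublev(GL_MODELVIEW_MATRIX, model);
    glGetDoublev(GL_PROJECTION_MATRIX, proj);
    glGetIntegerv(GL_VIEWPORT, viewport);

    GLdouble winZ;
    if (gluProject(corner[0], corner[1], corner[2],
                   model, proj, viewport, &winX, &winY, &winZ) != GLU_TRUE)
        return false;
    return winZ < 1.0;
}

// After the 3D pass, switch to a pixel-aligned orthographic projection
// and draw the handle as a plain 2D quad on top of the scene.
void DrawHandle2D(double winX, double winY, double halfSizePx,
                  int viewportW, int viewportH)
{
    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    gluOrtho2D(0, viewportW, 0, viewportH);
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();
    glDisable(GL_DEPTH_TEST);   // handles always draw over the scene

    glBegin(GL_QUADS);
    glVertex2d(winX - halfSizePx, winY - halfSizePx);
    glVertex2d(winX + halfSizePx, winY - halfSizePx);
    glVertex2d(winX + halfSizePx, winY + halfSizePx);
    glVertex2d(winX - halfSizePx, winY + halfSizePx);
    glEnd();

    glEnable(GL_DEPTH_TEST);
    glPopMatrix();              // restore modelview
    glMatrixMode(GL_PROJECTION);
    glPopMatrix();              // restore projection
    glMatrixMode(GL_MODELVIEW);
}
```

Hit testing for the handles would then be a plain 2D point-in-rectangle check against the same projected coordinates (keeping in mind that gluProject's window Y origin is at the bottom, while most mouse APIs measure from the top), with a fallback to 3D picking only when no handle is hit.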
There are probably more problems with both approaches. Is there an industry-standard way of tackling this problem?
I'm using OpenGL, if that makes a difference.