views:

415

answers:

4

How do I use gl.gluUnproject in my OpenGL ES 1.1 android app to determine what is selected when the user touches the screen?

My understanding is that the touch event results in a line and I have to find the first "thing" it intersects with.

Are there any tutorials on how to do this?

A: 

It completely depends on what it is that you're rendering. You probably shouldn't use the OpenGL picking approach, because:

a) It sounds like it will be slow, especially on Android.
b) You might want a larger 'touch area' for small objects so the user doesn't have to touch precisely where they are.
c) It doesn't really answer the right question: it tells you "what is the top-most graphical item rendered exactly where I touched?" whereas you want to know "what game entity did the user touch, or touch near?"

As I said, it completely depends on your game. If it is a 2D or nearly 2D game then it's simple and you just feed the touch coordinates into your game model. For 3D games I would suggest this simple algorithm that I just came up with on the spot:

  1. Make a list of all the touchable game objects that might have been touched.
  2. Transform their centre coordinates into 2D screen coordinates. (The projection is described here: http://www.flipcode.com/archives/Plotting_A_3D_Point_On_A_2D_Screen.shtml )
  3. Find the 2D distance to each object, and discard objects with a distance greater than your touch threshold.
  4. Find the closest object according to some distance metric. You'll have to experiment at this point and again, it depends on the nature of the game.
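The steps above can be sketched in plain Java. The `Touchable` class and the `screenX`/`screenY` fields are hypothetical names for illustration; they stand in for the projected centre coordinates from step 2:

```java
import java.util.Arrays;
import java.util.List;

/** A game object whose centre has already been projected to screen space (step 2). */
class Touchable {
    final String name;
    final float screenX, screenY;
    Touchable(String name, float screenX, float screenY) {
        this.name = name;
        this.screenX = screenX;
        this.screenY = screenY;
    }
}

class Picker {
    /**
     * Steps 3 and 4: returns the touchable closest to the touch point,
     * or null if nothing lies within the threshold.
     */
    static Touchable pick(List<Touchable> objects, float touchX, float touchY, float threshold) {
        Touchable best = null;
        float bestDistSq = threshold * threshold;  // compare squared distances, no sqrt needed
        for (Touchable t : objects) {
            float dx = t.screenX - touchX;
            float dy = t.screenY - touchY;
            float distSq = dx * dx + dy * dy;
            if (distSq <= bestDistSq) {
                bestDistSq = distSq;
                best = t;
            }
        }
        return best;
    }
}
```

Here plain 2D Euclidean distance is used as the metric; as the answer says, you may want to experiment with others (e.g. weighting by object size).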

For exact results the line intersection thing could be used. There are algorithms for that (search for 'plane line intersection raycasting').

Timmmm
You're thereby assuming that the shape of all objects is the same and that their distance in 3D to the camera is the same. I don't think these are valid assumptions in most cases.
mnemosyn
+1  A: 

If you are doing 2D-to-3D picking, you need to fiddle with matrices and vectors a bit. gluUnProject does not exist in OpenGL ES 1.1, so you have to do some of the math yourself.

Ray-object intersection is the way to go, then. Timmmm's answer already covers some of it, but there's more. The idea is to create a 3D ray from the 2D touch coordinates; the inverses of the view and projection matrices are needed for that. Once you have the ray, you can use a ray-intersection test of your choice, and of course you need to select the closest object, as in Timmmm's point 4. Bounding spheres and bounding boxes are easy to implement, and the internet is full of intersection-test tutorials for them.

This picking tutorial is for DirectX, but you might get the idea. The ray-construction part is the most important.

Edit: Android implements its own version of gluUnProject. It can be used to create the ray by calling it for the near and far planes (winZ = 0 and 1) and subtracting the near-plane result from the far-plane result to get the ray's direction. The ray origin is the view location. More here.
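Assuming you already have the two unprojected points described in the edit above (for example from Android's `GLU.gluUnProject` called with `winZ = 0` and `winZ = 1`, remembering that the window y axis is flipped, so pass `viewport[3] - touchY`), building the ray itself is just a subtraction and a normalization. A minimal sketch:

```java
/** A pick ray: an origin plus a normalized direction. */
class Ray {
    final float[] origin = new float[3];
    final float[] dir = new float[3];

    /**
     * Build a ray from the unprojected near-plane and far-plane points,
     * each a float[3] in world space.
     */
    static Ray fromNearFar(float[] near, float[] far) {
        Ray r = new Ray();
        float dx = far[0] - near[0];
        float dy = far[1] - near[1];
        float dz = far[2] - near[2];
        float len = (float) Math.sqrt(dx * dx + dy * dy + dz * dz);
        // Origin is the near-plane point (effectively the view location).
        r.origin[0] = near[0]; r.origin[1] = near[1]; r.origin[2] = near[2];
        // Direction is far minus near, normalized.
        r.dir[0] = dx / len; r.dir[1] = dy / len; r.dir[2] = dz / len;
        return r;
    }
}
```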

Virne
I was under the impression it did support gluUnProject(): http://developer.android.com/reference/android/opengl/GLU.html
Omega
Oh. Indeed Android seems to implement its own version, then. It's not part of OpenGL ES 1.1 anyway. You can use that function to create a ray by calling it twice: http://www.opengl.org/resources/faq/technical/selection.htm
Virne
Any suggestions on how I calculate what that ray is intersecting with?
Omega
http://www.realtimerendering.com/intersections.html contains links to tutorials. I suggest you start with ray-sphere intersection; it's the easiest and may be sufficient. Whatever you choose, you need access to the vertices of your objects and also their transformation matrices, to calculate a bounding sphere in world space. Perhaps Android has such functions already. (I've never used Android, so I don't know.)
Virne
A: 

I think for most applications you should go for the correct 3D-approach: Ray casting.

Take the location in 2D screen coordinates selected by the user and project it into your world space. This gives a 3D ray that originates at the camera and points into the scene. Now you need to perform collision testing in 3D. In most cases this can be sped up by reducing the objects to a set of simple geometries such as ellipsoids, spheres, and boxes.

Given the precision of handheld devices, that should already be sufficient. Note that depending on the shape of the object, you might need more than one basic primitive. Also, it makes no sense to always use the same primitive: a bounding sphere is obviously a very bad approximation for a long rod.
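As a sketch of the simplest such test, here is the standard geometric ray-sphere intersection (this is not from any particular library; it assumes the ray direction is normalized):

```java
class RaySphere {
    /**
     * Returns the smallest t >= 0 at which origin + t * dir hits the sphere,
     * or -1 if the ray misses. 'dir' must be normalized; 'o', 'd', 'c' are float[3].
     */
    static float intersect(float[] o, float[] d, float[] c, float radius) {
        // Vector from ray origin to sphere centre.
        float lx = c[0] - o[0], ly = c[1] - o[1], lz = c[2] - o[2];
        // Projection of that vector onto the ray direction.
        float tca = lx * d[0] + ly * d[1] + lz * d[2];
        // Squared distance from sphere centre to the ray.
        float distSq = lx * lx + ly * ly + lz * lz - tca * tca;
        float r2 = radius * radius;
        if (distSq > r2) return -1f;  // ray passes outside the sphere
        float thc = (float) Math.sqrt(r2 - distSq);
        float t0 = tca - thc;  // near hit
        float t1 = tca + thc;  // far hit
        if (t0 >= 0f) return t0;
        if (t1 >= 0f) return t1;  // origin is inside the sphere
        return -1f;               // sphere is entirely behind the ray
    }
}
```

Running this against every object's bounding sphere and keeping the smallest non-negative t gives you the closest picked object.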

mnemosyn
Do you know of any tutorials that explain doing these tests?
Omega
Not really, unfortunately. There are bits and pieces here and there, but I couldn't find one concise implementation yet. How soon do you need it? Perhaps I could write something up....
mnemosyn
I'll never say no to a new tutorial being made! Should you end up making one, I'd love to see it done in the context of Java+Android!
Omega
Well, you are invited to translate it. I don't even have Java _installed_. The problem is that there is a lot of code involved: Ray, Vector, Matrix, Matrix inversion, etc. I'll see what I can do...
mnemosyn
A: 

One thought I had was to cache the results of calling gluProject. This might only be practical when you are dealing with a fixed camera. At the end of any change in camera perspective, I can regenerate this cache of "touchables".

This also has the added benefit of ensuring that I'm only testing input against things that will respond to being touched.
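A minimal sketch of such a cache, with a hypothetical `project` callback standing in for `GLU.gluProject` (the class and method names here are made up for illustration):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

/** Caches projected screen positions of touchable objects; rebuild on camera change. */
class TouchableCache {
    private final Map<String, float[]> screenPos = new HashMap<>();

    /**
     * Regenerate the cache after any change in camera perspective.
     * 'project' maps a world-space position to a screen-space {x, y}
     * (in a real app, a wrapper around GLU.gluProject).
     */
    void rebuild(Map<String, float[]> worldPositions, Function<float[], float[]> project) {
        screenPos.clear();
        for (Map.Entry<String, float[]> e : worldPositions.entrySet()) {
            screenPos.put(e.getKey(), project.apply(e.getValue()));
        }
    }

    /** Test a touch against the cached positions only; null if nothing is close enough. */
    String hitTest(float x, float y, float threshold) {
        String best = null;
        float bestSq = threshold * threshold;
        for (Map.Entry<String, float[]> e : screenPos.entrySet()) {
            float dx = e.getValue()[0] - x, dy = e.getValue()[1] - y;
            float sq = dx * dx + dy * dy;
            if (sq <= bestSq) { bestSq = sq; best = e.getKey(); }
        }
        return best;
    }
}
```

Since only objects you registered end up in the cache, the hit test automatically ignores everything that doesn't respond to touch.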

I'd appreciate any thoughts on this approach!

Omega