+3  A: 

The angle between the robots is arctan(x-distance, y-distance) (most platforms provide this 2-argument arctangent, which does the quadrant adjustment for you). You then just have to check whether this angle is within some threshold of the current heading.
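
For example, a minimal C# sketch of that check (the names here are placeholders, and it assumes headings are measured counter-clockwise from the positive x-axis; if yours are measured from "straight up", swap the Atan2 arguments accordingly):

private static bool CanSee(double myX, double myY, double headingDegrees,
                           double otherX, double otherY, double fovDegrees)
{
    // Two-argument arctangent: does the quadrant adjustment for you.
    double bearing = Math.Atan2(otherY - myY, otherX - myX) * 180.0 / Math.PI;

    // Signed difference between bearing and heading, wrapped into [-180, 180).
    double diff = (bearing - headingDegrees + 540.0) % 360.0 - 180.0;

    return Math.Abs(diff) <= fovDegrees / 2.0;
}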

EDIT: Sorry for the vague answer, but here are some notes on your update:

  • Coordinates would be in a Cartesian system, not the window system. This leads to some different numbers:

    Orientation: 3.3 <-- this is incorrect; bot1 looks like it's oriented at about 4 or so. 191 degrees would be about an 8:30 position on the clock, which is almost directly pointed at tank 2. No wonder the system returns "Visible"!

    Azimuth: cy would be -31 (31 units below), rather than +31.

With these changes, you should get correct results.

Jimmy
You still have to figure out the angle from the heading vector to know if it still lies within the FOV, and until this step is laid out, this answer is only half complete.
Boon
This answer got Auto-Accepted because of the Bounty Time-Limit
Andreas Grech
+1  A: 

Calculate the angle and distance of each robot relative to the current one. If the angle is within some threshold of the current heading and the distance is within the max view range, then it can see it.

The only tricky thing will be handling the boundary case where the angle goes from 2pi radians to 0.
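
One way to handle that boundary (a C# sketch; names are illustrative) is to wrap the signed difference between the two angles into (-pi, pi] rather than comparing raw angles:

// Smallest signed difference between two angles in radians, in (-pi, pi].
// This removes the special case where an angle wraps from 2*pi back to 0.
static double AngleDifference(double a, double b)
{
    double d = (a - b) % (2 * Math.PI);
    if (d > Math.PI) d -= 2 * Math.PI;
    if (d <= -Math.PI) d += 2 * Math.PI;
    return d;
}

// Visible if Math.Abs(AngleDifference(bearingToOther, heading)) <= fov / 2
// and the distance is within the maximum view range.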

geofftnz
A: 

Looking at both of your questions, I think you can solve this problem using the math provided; you then have to solve many other issues around collision detection, firing bullets, etc. These are non-trivial to solve, especially if your bots aren't square. I'd recommend looking at physics engines (Farseer on CodePlex is a good WPF example), but that makes it into a project way bigger than a high school dev task.

Best advice I got for high marks: do something simple really well; don't part-deliver something brilliant.

MrTelly
This is good advice. Collision detection using circles is easy and accurate enough for a first cut.
geofftnz
A: 

Does your turret really have that wide a firing pattern? The path a bullet takes would be a straight line, and it would not get bigger as it travels. You should have a simple vector in the direction of the turret representing the turret's kill zone. Each tank would have a bounding circle representing its vulnerable area. Then you can proceed the way they do with ray tracing: a simple ray/circle intersection. Look at section 3 of the document Intersection of Linear and Circular Components in 2D.
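
A rough C# sketch of that ray/circle test, following the idea in section 3 of that document (the names here are illustrative):

// Does a ray from (px, py) in direction (dx, dy) hit a circle centred at (cx, cy) with radius r?
static bool RayHitsCircle(double px, double py, double dx, double dy,
                          double cx, double cy, double r)
{
    // Normalize the direction so the projection below is a real distance.
    double len = Math.Sqrt(dx * dx + dy * dy);
    dx /= len; dy /= len;

    // Vector from the ray origin (the turret) to the circle centre (the target tank).
    double ox = cx - px, oy = cy - py;

    // Distance along the ray to the point closest to the circle centre.
    double t = ox * dx + oy * dy;
    if (t < 0)
        return ox * ox + oy * oy <= r * r;   // target is behind the muzzle; hit only if we start inside it

    // Squared perpendicular distance from the circle centre to the ray.
    double distSq = ox * ox + oy * oy - t * t;
    return distSq <= r * r;
}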

John Ellinwood
That wide triangle you see in the image is the viewpoint of the tank. As in, if there is a bot in that viewpoint, the bot that is looking should be notified. But I don't know how to check if there is a bot in that particular range.
Andreas Grech
It's called a view frustum: a standard graphics algorithm for determining what elements to draw in a 3D scene. You want to know if an object is in a view frustum from a particular eye perspective. I'll look for references.
John Ellinwood
+1  A: 

Something like this within your bot's class (C# code):

/// <summary>
/// Check to see if another bot is visible from this bot's point of view.
/// </summary>
/// <param name="other">The other bot to look for.</param>
/// <returns>True iff <paramref name="other"/> is visible for this bot with the current turret angle.</returns>
private bool Sees(Bot other)
{
    // Get the actual angle of the tangent between the bots.
    var actualAngle = Math.Atan2(this.X - other.X, this.Y - other.Y) * 180/Math.PI + 360;

    // Compare that angle to the turret angle +/- the field of vision.
    var minVisibleAngle = (actualAngle - (FOV_ANGLE / 2) + 360);
    var maxVisibleAngle = (actualAngle + (FOV_ANGLE / 2) + 360); 
    if (this.TurretAngle >= minVisibleAngle && this.TurretAngle <= maxVisibleAngle)
    {
        return true;
    }
    return false;
}

Notes:

  • The +360's are there to force any negative angles to their corresponding positive values and to shift the boundary case of angle 0 to somewhere easier to range test.
  • This might be doable using only radian angles but I think they're dirty and hard to read :/
  • See the Math.Atan2 documentation for more details.
  • I highly recommend looking into the XNA Framework, as it's created with game design in mind. However, it doesn't use WPF.

This assumes that:

  • there are no obstacles to obstruct the view
  • Bot class has X and Y properties
  • The X and Y properties are at the center of the bot.
  • Bot class has a TurretAngle property which denotes the turret's positive angle relative to the x-axis, counterclockwise.
  • Bot class has a static const angle called FOV_ANGLE denoting the turret's field of vision.

Disclaimer: This is not tested or even checked to compile, adapt it as necessary.

Ben S
The minVisibleAngle (and max) always come out to roughly more than 600, yet the TurretAngle is >= 0 and < 360, so the condition that the TurretAngle is greater than the minVisibleAngle can never return true. What am I doing wrong?
Andreas Grech
Btw, as regards FOV_ANGLE I'm hardcoding a 40 atm. Is that too much, too little, or is it good ?
Andreas Grech
The field of vision in typical first-person shooters is 45-90 degrees. The human field of vision is approx 200 degrees, to give you a frame of reference.
Ben S
So in my context, what do you think is a good FOV_ANGLE ?
Andreas Grech
A 35mm camera has a 39.6 degree angle of view (http://en.wikipedia.org/wiki/Angle_of_view) so I think 40 is perfect.
Ben S
Add 360 to the turret's angle (in the Sees function, not overall) to compare everything in the right range. I added 360 to avoid having to test the boundary case of angles near 0
Ben S
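
For reference, a rough sketch of that comparison with the fix from the comments applied: the bearing is normalized into [0, 360) and the turret angle is also tested shifted by +/-360, so bearings near 0/360 still match. Like the original, this is untested; adapt as necessary.

// Inside Sees(Bot other):
// Bearing from this bot to the other, normalized into [0, 360).
// Assumes Cartesian coordinates (Y up) and TurretAngle measured
// counter-clockwise from the x-axis, as in the assumptions above.
var actualAngle = (Math.Atan2(other.Y - this.Y, other.X - this.X) * 180 / Math.PI + 360) % 360;

var minVisibleAngle = actualAngle - (FOV_ANGLE / 2);
var maxVisibleAngle = actualAngle + (FOV_ANGLE / 2);

// Test the turret angle and its +/-360 shifts so the wrap at 0/360 is covered.
return (this.TurretAngle       >= minVisibleAngle && this.TurretAngle       <= maxVisibleAngle)
    || (this.TurretAngle + 360 >= minVisibleAngle && this.TurretAngle + 360 <= maxVisibleAngle)
    || (this.TurretAngle - 360 >= minVisibleAngle && this.TurretAngle - 360 <= maxVisibleAngle);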
+1  A: 

A couple of suggestions after implementing something similar (a long time ago!):

The following assumes that you are looping through all bots on the battlefield (not a particularly nice practice, but quick and easy to get something working!)

1) It's a lot easier to check whether a bot is in range than whether it can currently be seen within the FOV, e.g.

double range = Math.Sqrt( Math.Pow(my.Location.X - bots.Location.X, 2) +
               Math.Pow(my.Location.Y - bots.Location.Y, 2) );

if (range < maxRange)
{
    // check for FOV
}

This ensures that it can potentially short-circuit a lot of FOV checking and speed up the process of running the simulation. As a caveat, you could have some randomness here to make it more interesting, such that after a certain distance the chance to see falls off linearly with the range to the bot (a rough sketch of this follows after point 3).

2) This article seems to have the FOV calculation stuff on it.

3) As an AI graduate ... have you tried neural networks? You could train them to recognise whether or not a robot is in range and a valid target. This would negate any horribly complex and convoluted maths! You could have a multi-layer perceptron [1], [2], feed in the bot's coordinates and the target's coordinates, and receive a nice fire/no-fire decision at the end. WARNING: I feel obliged to tell you that this methodology is not the easiest to achieve and can be horribly frustrating when it goes wrong. Due to the (simple) non-deterministic nature of this form of algorithm, debugging can be a pain. Plus you will need some form of learning: either back-propagation (with training cases) or a genetic algorithm (another complex process to perfect)! Given the choice, I would use number 3, but it's not for everyone!
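
A rough sketch of the randomness idea from point 1, where sureRange and maxRange are made-up parameters: within sureRange the bot is always seen, beyond maxRange never, and in between the chance to see falls off linearly with range.

static readonly Random rng = new Random();

static bool RollForSight(double range, double sureRange, double maxRange)
{
    if (range <= sureRange) return true;    // close enough to always be seen
    if (range >= maxRange) return false;    // beyond the maximum view range

    // Chance falls off linearly from 1 at sureRange to 0 at maxRange.
    double chance = 1.0 - (range - sureRange) / (maxRange - sureRange);
    return rng.NextDouble() < chance;
}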

TK
+1  A: 

This will tell you if the center of canvas2 can be hit by canvas1. If you want to account for the width of canvas2 it gets a little more complicated. In a nutshell, you would have to do two checks, one for each of the relevant corners of canvas2, instead of one check on the center.

/// assuming canvas1 is firing on canvas2

// positions of canvas1 and canvas2, respectively
// (you're probably tracking these in your Tank objects)
int x1, y1, x2, y2;

// orientation of canvas1 (angle)
// (you're probably tracking this in your Tank objects, too)
double orientation;
// angle available for firing
// (ditto, Tank object)
double angleOfSight;

// vector from canvas1 to canvas2
int cx, cy;
// angle of vector between canvas1 and canvas2
double azimuth;
// can canvas1 hit the center of canvas2?
bool canHit;

// find the vector from canvas1 to canvas2
cx = x2 - x1;
cy = y2 - y1;

// calculate the angle of the vector
azimuth = Math.Atan2(cy, cx);
// correct for Atan range (-pi, pi)
if(azimuth < 0) azimuth += 2*Math.PI;

// determine if canvas1 can hit canvas2
// can eliminate the and (&&) with Math.Abs but this seems more instructive
canHit = (azimuth < orientation + angleOfSight) &&
    (azimuth > orientation - angleOfSight);
Waylon Flinn
I posted some calculations in my post ([UPDATE]). Can you please check them out ? Thanks.
Andreas Grech
+1  A: 

It can be quite easily achieved with the use of a concept in vector math called the dot product.

http://en.wikipedia.org/wiki/Dot_product

It may look intimidating, but it's not that bad. This is the most correct way to deal with your FOV issue, and the beauty is that the same math works whether you are dealing with 2D or 3D (that's when you know the solution is correct).

(NOTE: If anything is not clear, just ask in the comment section and I will fill in the missing links.)

Steps:

1) You need two vectors: one is the heading vector of the main tank; the other is derived from the positions of the tank in question and the main tank.

For our discussion, let's assume the heading vector for main tank is (ax, ay) and vector between main tank's position and target tank is (bx, by). For example, if main tank is at location (20, 30) and target tank is at (45, 62), then vector b = (45 - 20, 62 - 30) = (25, 32).

Again, for purpose of discussion, let's assume main tank's heading vector is (3,4).

The main goal here is to find the angle between these two vectors, and dot product can help you get that.

2) Dot product is defined as

a * b = |a||b| cos(angle)

read as "a dot b", since a and b are not numbers; they are vectors.

3) or expressed another way (after some algebraic manipulation):

angle = acos((a * b) / |a||b|)

angle is the angle between the two vectors a and b, so this info alone can tell you whether one tank can see another or not.

|a| is the magnitude of the vector a which, according to the Pythagorean theorem, is just sqrt(ax * ax + ay * ay); the same goes for |b|.

Now the question becomes: how do you find a * b (a dot b) in order to find the angle?

4) Here comes the rescue: it turns out that the dot product can also be expressed as below:

a * b = ax * bx + ay * by

So angle = acos((ax * bx + ay * by) / |a||b|)

If the angle is less than half of your FOV, then the tank in question is in view. Otherwise it's not.

So, based on our example numbers above:

a = (3, 4) b = (25, 32)

|a| = sqrt(3 * 3 + 4 * 4)

|b| = sqrt(25 * 25 + 32 * 32)

angle = acos((3 * 25 + 4 * 32) / (|a||b|))

(Be sure to convert the resulting angle to degrees or radians, as appropriate, before comparing it to your FOV.)
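
Putting the steps above into a short C# sketch (fov here is the full field of view in radians; parameter names are illustrative):

// (ax, ay): heading vector of the main tank.
// (bx, by): vector from the main tank to the target tank.
static bool InView(double ax, double ay, double bx, double by, double fov)
{
    double dot  = ax * bx + ay * by;               // a . b
    double magA = Math.Sqrt(ax * ax + ay * ay);    // |a|
    double magB = Math.Sqrt(bx * bx + by * by);    // |b|

    double angle = Math.Acos(dot / (magA * magB)); // angle between a and b
    return angle <= fov / 2;
}

// With the example numbers: InView(3, 4, 25, 32, fov).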

Boon
An adaptation of this approach (if you're smart you can eliminate the cosine inverse) is probably the one I'd employ in practice, if the code was going to be in a tight loop. Otherwise, I think the level of mathematical sophistication makes it prohibitively expensive (maintainability).
Waylon Flinn
It's not as expensive as it seems. Once you lay everything down in code, it's less than ten lines. It's the explanation that makes it look complicated and intimidating. The inverse cosine can be optimized with other tricks, but it's almost never necessary.
Boon
I agree, this code is likely faster. The problem is that it's hard for a mathematical novice to understand. My thought on inverse cosine is that you can remove it by taking the cosine of the FOV/2 once, outside the loop. You can also ensure that 'a' is a unit vector and save a multiplication.
Waylon Flinn
Good call, do cosine on FOV/2 once and avoid having to do inverse cosine each time.
Boon
A: 

Your updated problem seems to come from different "zero" directions for orientation and azimuth: an orientation of 0 seems to mean "straight up", but an azimuth of 0 means "straight right".
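
If that is the cause, one hypothetical correction (assuming both angles increase counter-clockwise) is to shift the orientation by a quarter turn before comparing it with the azimuth:

// "Straight up" in the orientation frame is pi/2 in the azimuth frame.
double orientationAsAzimuth = (orientation + Math.PI / 2) % (2 * Math.PI);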

Svante