A race condition occurs when two or more threads can access shared data and they try to change it at the same time. Because the thread scheduling algorithm can swap between threads at any time, you don't know the order in which the threads will attempt to access the shared data. Therefore, the result of the change in data is dependent on the thread scheduling algorithm, i.e. both threads are "racing" to access/change the data.
Often problems occur when one thread does a "check-then-act" (e.g. "check" if the value is X, and then "act" to do something that depends on the value being X) and another thread does something to the value in between the "check" and the "act".
if (x == 5) // The "Check"
{
    y = x * 2; // The "Act"

    // If x is changed by another thread in between the "if (x == 5)" and the "y = x * 2",
    // y will not be equal to 10.
}
The point being that y could be 10, or it could be anything else, depending on whether another thread changed x in between the check and the act. You have no real way of knowing.
To prevent race conditions from occurring, you would typically put a lock around the shared data to ensure only one thread can access the data at a time. This would mean something like this:
// Obtain lock for x
if (x == 5)
{
    y = x * 2; // Now, nothing can change x until the lock is released. Therefore y = 10
}
// Release lock for x