Your description makes it sound like you're doing the LU decomposition in-place — that is, overwriting the matrix values as you perform the decomposition. That's certainly more efficient from a memory point of view. If that's true, then "being changed and whether to recalculate the factorization" is a moot point: you lose the original matrix when you overwrite it with the LU decomposition.
If you are NOT overwriting the original, it also sounds like you'd want to recalculate the decomposition whenever you give a matrix element a new value. I'd recommend against that; it seems inefficient to me. If a client wanted to alter many values, they probably wouldn't want to pay the cost of another LU decomposition until they were all done.
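One way to get that behavior is a "dirty" flag: mutations only mark the cached factorization stale, and the expensive work happens at most once per batch of edits. Here's a minimal sketch of the idea — all the names (`LazyFactoredMatrix`, `set`, `factors`) are illustrative, not from any library, and the decomposition itself is stubbed out:

```java
// Sketch of deferred refactorization: set() is cheap and only marks the
// cache stale; the decomposition runs at most once per batch of edits.
public class LazyFactoredMatrix {
    private final double[][] values;     // original matrix, never overwritten
    private double[][] cachedFactors;    // stale whenever 'dirty' is true
    private boolean dirty = true;
    int decomposeCount = 0;              // exposed only to illustrate the savings

    public LazyFactoredMatrix(double[][] values) {
        this.values = values;
    }

    // Clients may call this many times; no refactorization happens here.
    public void set(int row, int col, double value) {
        values[row][col] = value;
        dirty = true;
    }

    // The expensive work is paid only when a result is actually needed.
    public double[][] factors() {
        if (dirty) {
            cachedFactors = expensiveDecompose();
            dirty = false;
        }
        return cachedFactors;
    }

    private double[][] expensiveDecompose() {
        decomposeCount++;
        // Placeholder for a real LU factorization of 'values';
        // here we just return a defensive copy.
        double[][] copy = new double[values.length][];
        for (int i = 0; i < values.length; i++) {
            copy[i] = values[i].clone();
        }
        return copy;
    }
}
```

With this, a client can change a dozen elements and still trigger only one refactorization when they finally ask for the factors.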
You can try a factory interface for matrix transformations/decompositions. It's a simple one that takes in a Matrix and returns a (decomposed) Matrix. You get to keep your original matrix that way; the return value is a new instance. You can change the original and then pass it to the factory to recalculate the LU decomposition. It costs you memory, which can be a problem for very large matrices.
In Java, I'd write it like this:
public interface MatrixDecomposition
{
    Matrix decompose(Matrix original);
}
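A minimal implementation of that interface might look like the sketch below. The `Matrix` class here is a bare-bones stand-in for whatever matrix type you already have, and the factorization is Doolittle LU without pivoting, kept short for illustration (a real implementation would pivot for numerical stability):

```java
// Minimal stand-in for your own matrix class.
class Matrix {
    final double[][] data;
    Matrix(double[][] data) { this.data = data; }
    double get(int i, int j) { return data[i][j]; }
}

interface MatrixDecomposition {
    Matrix decompose(Matrix original);
}

// One implementation: Doolittle LU without pivoting. The original matrix
// is left untouched because we work on a copy and return a new Matrix
// holding the combined L (below the diagonal) and U (on and above it).
class LuDecomposition implements MatrixDecomposition {
    @Override
    public Matrix decompose(Matrix original) {
        int n = original.data.length;
        double[][] lu = new double[n][];
        for (int i = 0; i < n; i++) {
            lu[i] = original.data[i].clone();  // defensive copy
        }
        for (int k = 0; k < n; k++) {
            for (int i = k + 1; i < n; i++) {
                lu[i][k] /= lu[k][k];          // multiplier, stored as L(i,k)
                for (int j = k + 1; j < n; j++) {
                    lu[i][j] -= lu[i][k] * lu[k][j];
                }
            }
        }
        return new Matrix(lu);
    }
}
```

The client keeps `original` around, edits it as much as it likes, and calls `decompose` again only when it wants fresh factors.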
In C++, it'd be an abstract class with a pure virtual function, something like `virtual Matrix decompose(const Matrix& original) = 0;`.
There are other types of decomposition (e.g., QR, SVD, etc.), so this design will nicely accommodate those when you need them. Just write another implementation of the interface and Bob's your uncle.
Lots of physics problems are characterized by "banded" sparse matrices, which have a band of non-zero values clustered around the diagonal and zeroes outside it. If you use storage schemes that don't store the zero values outside the band, you can solve much larger problems in memory.
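To make the savings concrete, here's a sketch of banded storage, assuming a hypothetical `BandedMatrix` class: an n-by-n matrix whose nonzeros all lie within `bandwidth` of the diagonal needs only n * (2 * bandwidth + 1) doubles instead of n * n.

```java
// Banded storage sketch: element (i, j) lives at band[i][j - i + bandwidth]
// when |j - i| <= bandwidth; everything outside the band is implicitly zero.
public class BandedMatrix {
    private final int n;
    private final int bandwidth;
    private final double[][] band;  // n rows, 2*bandwidth + 1 columns

    public BandedMatrix(int n, int bandwidth) {
        this.n = n;
        this.bandwidth = bandwidth;
        this.band = new double[n][2 * bandwidth + 1];
    }

    public double get(int i, int j) {
        int d = j - i;
        if (d < -bandwidth || d > bandwidth) {
            return 0.0;  // outside the band: zero by construction
        }
        return band[i][d + bandwidth];
    }

    public void set(int i, int j, double value) {
        int d = j - i;
        if (d < -bandwidth || d > bandwidth) {
            if (value != 0.0) {
                throw new IllegalArgumentException("element outside the band");
            }
            return;  // storing a zero outside the band is a no-op
        }
        band[i][d + bandwidth] = value;
    }
}
```

For a tridiagonal system (bandwidth 1) of size 100,000, that's 300,000 stored values instead of 10 billion.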