It might also be worthwhile to step back and consider why you want to do this. If you are trying to remove character differences you consider insignificant, you should look at the Unicode collation algorithm. This is the standard way to disregard differences such as case or diacritics when comparing strings for searching or sorting.
If you plan to display the modified text, consider your audience: what you can safely filter away is locale-sensitive. In US English, "Igloo" = "igloo" and "resume" = "résumé", but in Turkish the lowercase of I is ı (dotless), and in French cote means quote, côté means side, and côte means coast. So the collation language determines which differences are significant.
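To illustrate the locale problem concretely: Python's built-in case mapping is locale-independent, so one fixed folding rule cannot be right for every language (a small sketch, using only standard string methods):

```python
# lower()/upper() always use the default Unicode case mapping.
print("Igloo".lower())   # "igloo" -- correct for English, wrong for Turkish,
                         # where uppercase I lowercases to dotless ı (U+0131)

# The Turkish dotless ı uppercases to a plain "I", so a naive
# upper/lower round trip silently loses the distinction.
print("\u0131".upper())  # "I"
```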
If removing diacritics is the right solution for your application, it is safest to produce your own table to which you explicitly add the characters you want to convert.
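An explicit table can be as simple as a character-to-character mapping; here is a minimal Python sketch (the table contents are just an example — you would fill in exactly the characters your application has vetted):

```python
# A hand-built translation table: only characters you have explicitly
# decided are safe to fold. Everything else passes through unchanged.
FOLD_TABLE = str.maketrans({
    "é": "e", "è": "e", "ê": "e", "ë": "e",
    "á": "a", "à": "a", "â": "a",
    "ô": "o",
})

def fold(text: str) -> str:
    return text.translate(FOLD_TABLE)

print(fold("résumé"))  # resume
print(fold("côté"))    # cote
```

Because the table is explicit, nothing is folded by accident — the trade-off is that you must maintain it as your input data grows.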
A general, automated approach can be built on Unicode decomposition. Decomposing a character with diacritics yields the base character plus "combining" characters (the diacritic marks). Filter out anything that is a combining character, and you are left with the non-diacritic base characters.
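In Python, the standard `unicodedata` module provides both the decomposition and the combining-class test, so the whole approach fits in a few lines:

```python
import unicodedata

def strip_diacritics(text: str) -> str:
    # NFD splits each character into its base character plus any
    # combining marks; drop the marks, then recompose with NFC.
    decomposed = unicodedata.normalize("NFD", text)
    stripped = "".join(ch for ch in decomposed
                       if not unicodedata.combining(ch))
    return unicodedata.normalize("NFC", stripped)

print(strip_diacritics("résumé"))  # resume
print(strip_diacritics("ø"))       # ø -- has no decomposition, so it survives
```

Note that characters such as ø or ı do not decompose into a base letter plus a combining mark, so they pass through untouched — one example of the indiscriminate behavior discussed next.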
The lack of discrimination in the automated method, however, could have some unexpected effects. I'd recommend a lot of testing on a representative body of text.