An important principle of denormalization is that it does not sacrifice normalized data. You should always start with a schema that accurately describes your data. As such, you should put different kinds of information in different kinds of tables, and you should put as many constraints on your data as you think is reasonable.
All of these goals tend to make queries a teeny bit longer, as you have to join different tables to get the desired information, but with the right names for tables and columns, this shouldn't be a burden from the point of view of readability.
More importantly, these goals can have an effect on performance. You should monitor your actual load to see if your database is performing adequately. If nearly all of your queries are returning quickly, and you have lots of CPU headroom for more queries, then you're done.
If you find that write queries are taking too long, make sure you don't denormalize your data. That would make the database work harder to keep things consistent, since it would have to do many reads followed by many more writes. Instead, look at your indexes. Do you have indexes on columns you rarely query? Do you have the indexes needed to verify the integrity of an update, such as on foreign-key columns?
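To illustrate that index audit (the table and index names here are hypothetical): every index on a table must be maintained on every write, so an index on a column no query ever filters by is pure write overhead and can be dropped.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (
        id          INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL,
        note        TEXT
    );
    CREATE INDEX idx_orders_customer ON orders(customer_id);  -- used by lookups and FK checks
    CREATE INDEX idx_orders_note     ON orders(note);         -- never actually queried
""")

# List the indexes currently maintained on the table.
indexes = sorted(row[1] for row in conn.execute("PRAGMA index_list('orders')"))

# Keep the index our queries rely on; drop the one that only
# slows down every INSERT and UPDATE.
conn.execute("DROP INDEX idx_orders_note")
indexes = sorted(row[1] for row in conn.execute("PRAGMA index_list('orders')"))
```

The `PRAGMA index_list` call is SQLite-specific; other databases expose the same information through their system catalogs (e.g. `pg_indexes` in PostgreSQL).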
If read queries are your bottleneck, then once again you want to start by looking at your indexes. Do you need to add an index or two to avoid table scans? If you just can't avoid the table scans, are there things you could do to make each row smaller, such as reducing the number of characters in a varchar column, or splitting rarely queried columns into another table to be joined when they are needed?
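A quick sketch of spotting and fixing a table scan, using SQLite's `EXPLAIN QUERY PLAN` on a hypothetical `orders` table (other databases have an equivalent `EXPLAIN`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE orders (
        id          INTEGER PRIMARY KEY,
        customer_id INTEGER,
        total_cents INTEGER
    )""")

# Without an index, filtering on customer_id scans the whole table.
plan_before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = ?", (1,)
).fetchone()[3]

# Adding an index lets the same query do an index search instead.
conn.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")
plan_after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = ?", (1,)
).fetchone()[3]
```

Here `plan_before` reports a scan of `orders`, while `plan_after` reports a search using `idx_orders_customer` (the exact wording varies between SQLite versions).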
If there is a specific slow query that always uses the same join, then that query might benefit from denormalization. First verify that reads on those tables strongly outnumber writes. Determine which columns you need from one table to add to the other. You might want to use a slightly different name for those columns so that it's more obvious that they come from denormalization. Alter your write logic to update both the original table used in the join and the denormalized fields.
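The steps above can be sketched as follows (all table, column, and function names are hypothetical): a report joins `orders` to `customers` just to fetch the customer's name, so we copy the name onto `orders` as `denorm_customer_name`. The `denorm_` prefix makes its origin obvious, and every write path that changes the name must now touch both tables.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
    CREATE TABLE orders (
        id          INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customers(id)
    );
    INSERT INTO customers VALUES (1, 'Ada');
    INSERT INTO orders VALUES (1, 1);
""")

# Add the denormalized column and backfill it from the source table.
conn.execute("ALTER TABLE orders ADD COLUMN denorm_customer_name TEXT")
conn.execute("""
    UPDATE orders SET denorm_customer_name =
        (SELECT name FROM customers WHERE customers.id = orders.customer_id)
""")

def rename_customer(conn, customer_id, new_name):
    """Write logic altered to update both the original and the denormalized copy."""
    conn.execute("UPDATE customers SET name = ? WHERE id = ?",
                 (new_name, customer_id))
    conn.execute("UPDATE orders SET denorm_customer_name = ? WHERE customer_id = ?",
                 (new_name, customer_id))

rename_customer(conn, 1, 'Ada L.')
name = conn.execute(
    "SELECT denorm_customer_name FROM orders WHERE id = 1").fetchone()[0]
```

After denormalization, the slow report can read the name straight from `orders` with no join at all.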
It's important to note that you aren't removing the old table. The problem with denormalized data is that while it accelerates the specific query it was designed for, it tends to complicate other queries. In particular, write queries must do more work to ensure that the data remains consistent, either by copying data from table to table, by doing additional subselects to make sure that the data is valid, or by jumping over other sorts of hurdles. By keeping the original table, you can leave all your old constraints in place, so at least those original columns are always valid. If you find for some reason that the denormalized columns are out of sync, you can switch back to the original, slower query while everything is valid, and then work on ways to rebuild the denormalized data.
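That recovery path can be sketched concretely (hypothetical names again): because the original `customers` table keeps its constraints, we can always detect denormalized rows that drifted out of sync and rebuild them from the authoritative source.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
    CREATE TABLE orders (
        id                   INTEGER PRIMARY KEY,
        customer_id          INTEGER NOT NULL REFERENCES customers(id),
        denorm_customer_name TEXT
    );
    INSERT INTO customers VALUES (1, 'Ada');
    INSERT INTO orders VALUES (1, 1, 'Ada'), (2, 1, 'stale name');
""")

# Audit: find denormalized rows that disagree with the original table.
# (IS NOT is SQLite's NULL-safe inequality.)
stale = conn.execute("""
    SELECT orders.id FROM orders
    JOIN customers ON customers.id = orders.customer_id
    WHERE orders.denorm_customer_name IS NOT customers.name
""").fetchall()

# Rebuild every denormalized value from the source of truth.
conn.execute("""
    UPDATE orders SET denorm_customer_name =
        (SELECT name FROM customers WHERE customers.id = orders.customer_id)
""")
```

In the meantime, any query rewritten to use the original join keeps returning correct results, because the source table never stopped enforcing its constraints.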