To your exact question: with such a small schema, dumping the contents of the original Messages table will be faster against the denormalized version. The query plan is smaller and easier to optimize, and there is no join overhead.
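To make the comparison concrete, here are the two queries I'm picturing; the column names beyond the two table names are my assumption, not from your question:

```sql
-- Normalized: dumping messages with sender names requires a join.
SELECT p.name, m.body
FROM Messages m
JOIN People p ON p.person_id = m.person_id;

-- Denormalized: everything already lives in Messages, so it's a single table scan.
SELECT sender_name, body
FROM Messages;
```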
In general, it's much, much more complicated.
Whether it's the right thing to do is a separate question. For that, start with a normalized design, but be willing and prepared to denormalize if there's a compelling reason to do so. There are sometimes legitimate reasons to denormalize, though usually the benefits of normalized data outweigh any performance loss.
Normalized data is easier to maintain and is generally more flexible. On the flexibility front, having a numeric primary key lets you have multiple people with the same name, you can add more fields to People easily, and you can run a report listing everyone in the system without scanning all of Messages.
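For illustration, a minimal sketch of the normalized schema I have in mind (column names and types are my assumptions):

```sql
CREATE TABLE People (
    person_id INTEGER PRIMARY KEY,   -- numeric pkey: two people can share a name
    name      VARCHAR(100) NOT NULL
);

CREATE TABLE Messages (
    message_id INTEGER PRIMARY KEY,
    person_id  INTEGER NOT NULL REFERENCES People(person_id),
    body       VARCHAR(255)
);

-- "Everyone in the system" without touching Messages at all:
SELECT person_id, name FROM People;
```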
But performance may be a factor. Given the data in the two tables, the database has several options for how to join them. It may use either People or Messages as the driving table, and how the join is executed (nested loops, hash join, sort/merge, etc.) will affect performance.
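If you want to see which strategy the optimizer actually picks, most databases let you inspect the plan. A PostgreSQL-style example (syntax varies by product; Oracle has EXPLAIN PLAN, SQL Server has SHOWPLAN, and the column names here are still my assumption):

```sql
EXPLAIN ANALYZE
SELECT p.name, m.body
FROM Messages m
JOIN People p ON p.person_id = m.person_id;
```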
But on top of that, normalized can actually be faster. What if your schema is more complicated than you describe? Say your People table has 50 fields of HR-related stuff and your Messages table has only a single 20-character message field. If you have two people but 100k messages, the normalized version will actually be faster, because I/O is the biggest limiting factor for databases. If you dump all the data in one query, the normalized version fetches those 50 fields only once per person, and your Messages table is densely packed with data. In the denormalized version, each row of Messages carries all 51 fields, and you drastically increase the number of I/Os needed to get the same result.
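A rough back-of-envelope calculation to make the I/O argument concrete (the row sizes and 8 KB page size are assumptions, and row/page overhead is ignored):

```
Assume: 50 HR fields ≈ 1,000 bytes/row, message + key ≈ 24 bytes, 8 KB pages.

Normalized:
  Messages: ~340 rows/page  → 100,000 rows ≈ 300 pages
  People:   2 rows          → 1 page
  Total ≈ ~300 pages read

Denormalized:
  Messages row ≈ 1,020 bytes → ~8 rows/page → 100,000 rows ≈ 12,500 pages
  Total ≈ ~12,500 pages read, roughly 40x the I/O for the same result
```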