There are two competing philosophies on this issue.
I'm firmly in the camp of using composite primary keys for certain tables, myself.
When I design a database, I use ER modeling to collect information requirements in one place. Every value to be served up by the database is an instance of an attribute, and every attribute describes a subject matter entity or a relationship among two or more subject matter entities. Foreign keys don't enter into the analysis phase; they come later, at design time.
Before starting database design, I decide how each entity will be identified, from the application perspective. These are going to give me my primary keys. Every table that describes an entity will have a simple primary key, the identifier for the entity. Simple relationships (binary, many-to-one) don't need a table of their own. Every table that describes a complex relationship will have a composite primary key made up of the primary keys of the participating entities.
Foreign keys then plug in in the obvious way. Well, obvious to me, at least. This yields an initial table design in 3NF, and maybe higher. The design might later be altered by further normalization or by other design patterns incompatible with normalization (so-called denormalization). But this is the first cut at table design.
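To make that concrete, here is a sketch of the first-cut design for an invented students-and-courses example (the table and column names are mine, not from any particular schema). Entity tables get simple, application-chosen keys; the many-to-many enrollment relationship gets a composite primary key built from the keys of the participating entities. SQLite via Python keeps it runnable:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Entity tables: simple primary keys, chosen from the application's
    -- own identifiers rather than generated numbers.
    CREATE TABLE student (
        student_id TEXT PRIMARY KEY,   -- e.g. a registrar-issued ID
        name       TEXT NOT NULL
    );
    CREATE TABLE course (
        course_id  TEXT PRIMARY KEY,   -- e.g. 'CS101'
        title      TEXT NOT NULL
    );
    -- Complex (many-to-many) relationship: composite primary key made up
    -- of the primary keys of the participating entities.
    CREATE TABLE enrollment (
        student_id TEXT NOT NULL REFERENCES student(student_id),
        course_id  TEXT NOT NULL REFERENCES course(course_id),
        grade      TEXT,
        PRIMARY KEY (student_id, course_id)
    );
""")
conn.execute("INSERT INTO student VALUES ('S001', 'Ada')")
conn.execute("INSERT INTO course VALUES ('CS101', 'Databases')")
conn.execute("INSERT INTO enrollment VALUES ('S001', 'CS101', 'A')")

# The composite key itself rejects a duplicate enrollment of the same pair;
# no extra UNIQUE constraint is needed.
try:
    conn.execute("INSERT INTO enrollment VALUES ('S001', 'CS101', 'B')")
except sqlite3.IntegrityError as e:
    print("rejected duplicate:", e)
```

Note that the integrity rule "a student enrolls in a given course at most once" falls out of the key choice for free.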
This design practice yields different results, in terms of both performance and data integrity, than the prevailing practice. The prevailing practice puts an autonumber column called "id" in as the first column of every table. This column becomes the primary key.
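For contrast, here are the same three invented tables reworked in the prevailing style, again as a sketch. One concrete integrity difference shows up immediately: with a surrogate id as the only key, nothing prevents the same (student, course) pair from being enrolled twice unless a UNIQUE constraint is added by hand.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Prevailing style: every table gets an autonumber surrogate
    -- as its first column, which becomes the primary key.
    CREATE TABLE student (
        id   INTEGER PRIMARY KEY AUTOINCREMENT,
        name TEXT NOT NULL
    );
    CREATE TABLE course (
        id    INTEGER PRIMARY KEY AUTOINCREMENT,
        title TEXT NOT NULL
    );
    CREATE TABLE enrollment (
        id         INTEGER PRIMARY KEY AUTOINCREMENT,
        student_id INTEGER NOT NULL REFERENCES student(id),
        course_id  INTEGER NOT NULL REFERENCES course(id),
        grade      TEXT
    );
""")
conn.execute("INSERT INTO student (name) VALUES ('Ada')")
conn.execute("INSERT INTO course (title) VALUES ('Databases')")
conn.execute("INSERT INTO enrollment (student_id, course_id) VALUES (1, 1)")
conn.execute("INSERT INTO enrollment (student_id, course_id) VALUES (1, 1)")

# Both rows were accepted: the id key says nothing about the
# subject matter, so the duplicate enrollment slips in silently.
count = conn.execute("SELECT COUNT(*) FROM enrollment").fetchone()[0]
print(count)  # 2
```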
In essence, this practice uses the SQL table structure to mimic the graph model of data, even if it looks like a relational model. The id column is essentially a surrogate for the row's address. The graph model of data has an upside and a downside. More on this if requested.