The word "semantic" is ambiguous, and you've encountered two slightly different meanings in these different contexts.
The first meaning (your code) relates to how a compiler interprets the code you type. There are varying degrees of interpretation here - syntax is one level, where interpretation is simply deciding that `n1*n2` means you want to perform multiplication. But there is also a higher level of interpretation: if `n1` is an integer and `n2` is floating point, what is the type of the result? If I cast it, should the value be rounded, truncated, or something else? These are "semantic" questions rather than syntactic ones, but someone, somewhere, decided that yes, the compiler can answer them for most people.
They also decided that the compiler has limits to what it can (and should!) interpret. For example, it can decide that casting to an `int` is a truncation, not a rounding, but it can't decide what you really want when you try to multiply an array by a number. (Sometimes people decide that they CAN, though. In Python, `[1] * 3 == [1, 1, 1]`.)
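To see where C draws that line, here is a short sketch; the rejected line is left commented out because it has no defined meaning:

```c
#include <stdio.h>

int main(void) {
    int a[] = {1, 2, 3};
    int n   = 3;

    /* The compiler has decided it should NOT guess here: an array
       multiplied by a number has no meaning in C, so uncommenting
       the next line is a compile-time error (something along the
       lines of "invalid operands to binary *"), not an interpretation. */
    /* int b = a * n; */

    printf("%d %d\n", a[0], n);
    return 0;
}
```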
The second meaning refers to a much wider scope. If the result of that operation is supposed to be sent to a peripheral device that accepts values from `0x000` to `0xFFF`, and you multiply `0x7FF` by `0x010`, you've clearly made a semantic error. The designers of the peripheral device must decide whether, or how, to cope with that. You, as a programmer, could also decide to put in some sanity checks. But the compiler has no idea about these external semantic constraints, or how to enforce them (filter user input? return an error? truncate? wrap?), which is what the second quote is saying.
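For this second meaning, the checks have to live in your code, because only you know the constraint. A minimal sketch, assuming a hypothetical 12-bit peripheral and a made-up `write_reg()` helper standing in for the real device access; rejecting out-of-range values is just one possible policy (clamping or wrapping would be others):

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define PERIPHERAL_MAX 0xFFFu   /* device accepts 0x000..0xFFF */

/* Hypothetical stand-in for whatever actually talks to the device. */
static void write_reg(uint16_t value) {
    printf("writing 0x%03X\n", (unsigned)value);
}

/* The compiler cannot know this constraint, so we enforce it ourselves.
   Someone has to choose the policy; here we choose to reject. */
static bool send_to_peripheral(uint32_t value) {
    if (value > PERIPHERAL_MAX) {
        fprintf(stderr, "semantic error: 0x%X exceeds 0xFFF\n", (unsigned)value);
        return false;
    }
    write_reg((uint16_t)value);
    return true;
}

int main(void) {
    send_to_peripheral(0x7FFu * 0x010u);  /* 0x7FF0: rejected */
    send_to_peripheral(0x07Fu * 0x010u);  /* 0x07F0: accepted */
    return 0;
}
```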