First there is the difference between the language and the grammar. A
language is a set of strings. A grammar is a way of describing a set of
strings (one often says that a grammar "generates" the strings). A given
language may be described by several grammars.
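For instance, both of the following grammars generate the same language, the non-empty strings of a's:

    S -> a S | a
    S -> S a | a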
The best known kinds of grammars are the production-based ones. Those were classified by Chomsky into:

- unrestricted grammars, where there can be anything on both sides of the productions
- monotonic grammars, where the left-hand side is at most as long as the right-hand side
- context-sensitive grammars, where a single non-terminal is expanded, in a context of surrounding symbols that is left unchanged
- context-free grammars, where the left-hand side of a production consists of a single non-terminal
- regular grammars, where the left-hand side of a production is a single non-terminal and the right-hand side may contain at most one non-terminal, as its last element
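Schematically, writing upper-case letters for single non-terminals, lower-case letters for terminals and Greek letters for arbitrary sequences of symbols, the production shapes are:

    unrestricted:       α -> β            (α not empty)
    monotonic:          α -> β            with |α| <= |β|
    context-sensitive:  α A β -> α γ β    (γ not empty)
    context-free:       A -> γ
    regular:            A -> a B   or   A -> a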
Monotonic and context-sensitive grammars are also called type 1 grammars. They generate exactly the same languages. They are less powerful than type 0 grammars. AFAIK, while I've seen proofs that there are languages which have a type 0 grammar but no type 1 grammar, I know of no concrete example.
Context-free grammars are also called type 2 grammars. They are less powerful than type 1 grammars. The standard example of a language for which there is a type 1 grammar but no type 2 grammar is the set of strings aⁿbⁿcⁿ: an equal number of a's, b's and c's, with all the a's before the b's and all the b's before the c's.
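One standard monotonic grammar for it is:

    S   -> a S B C | a B C
    C B -> B C
    a B -> a b
    b B -> b b
    b C -> b c
    c C -> c c

For instance aabbcc derives as S => aSBC => aaBCBC => aaBBCC => aabBCC => aabbCC => aabbcC => aabbcc.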
Regular grammars are also called type 3 grammars. They are less powerful than type 2 grammars. The standard example of a language for which there is a type 2 grammar but no type 3 grammar is the set of strings with correctly matched parentheses.
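For instance:

    S -> ( S ) S | ε

where ε denotes the empty string, generates exactly the correctly matched strings.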
Ambiguity in grammars is something outside that hierarchy. A grammar is ambiguous if a given string can be generated in several ways. There are unambiguous type 1 grammars, and there are ambiguous type 3 grammars.
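For instance:

    E -> E + E | a

is ambiguous, since a + a + a can be derived with either + as the outermost operator, while E -> E + a | a generates the same language unambiguously.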
Then there are other kinds of grammars which aren't part of the Chomsky classification (two-level grammars, attribute grammars, tree-adjoining grammars, ...) even though they are based on productions. Some of these are even able to describe the semantics of programming languages.
Then there are parsing algorithms. These are often based on CFGs and impose more restrictions to get better parsing speed (parsing with a general CSG needs exponential time, with a general CFG cubic time, while the common algorithms need only linear time). Those restrictions introduce other classes of grammars.
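As an illustration of the general cubic case, here is a minimal sketch of the CYK algorithm in C++ (the grammar, in Chomsky normal form, and all names are mine, chosen for the example); it recognizes the non-empty correctly matched parenthesis strings:

    #include <cassert>
    #include <string>
    #include <vector>

    // CYK recognizer for a hard-coded grammar in Chomsky normal form:
    //   S -> S S | L T | L R     T -> S R     L -> '('     R -> ')'
    bool recognizes(const std::string& w) {
        const int n = w.size();
        if (n == 0) return false;
        enum { S, T, L, R, N };
        // chart[i][len][X] is true iff non-terminal X derives w[i .. i+len-1]
        std::vector<std::vector<std::vector<bool>>> chart(
            n, std::vector<std::vector<bool>>(n + 1, std::vector<bool>(N, false)));
        for (int i = 0; i < n; ++i) {                   // terminal rules
            if (w[i] == '(') chart[i][1][L] = true;
            if (w[i] == ')') chart[i][1][R] = true;
        }
        const int rules[][3] = {{S, S, S}, {S, L, T}, {S, L, R}, {T, S, R}};
        for (int len = 2; len <= n; ++len)              // three nested loops:
            for (int i = 0; i + len <= n; ++i)          // cubic time overall
                for (int k = 1; k < len; ++k)           // split point
                    for (const auto& r : rules)
                        if (chart[i][k][r[1]] && chart[i + k][len - k][r[2]])
                            chart[i][len][r[0]] = true;
        return chart[0][n][S];
    }

    int main() {
        assert(recognizes("(())()"));
        assert(!recognizes("(()"));
    }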
CSGs and monotonic grammars are in fact of little use for describing or compiling a programming language: their global behaviour isn't directly apparent but emerges from the interaction of local rules, so they are difficult to understand and attaching semantics to productions is problematic; parsing them is costly, in general exponential; and error handling is difficult. The non-Chomsky grammars mentioned above were introduced to solve these issues.
Back to C++. The standard describes the C++ language with a context-free grammar, but there are several complications.

There are ambiguities (the famous "most vexing parse"), so a compiler has to recognize the ambiguities and use the right interpretation: `C x();` is a function declaration, not an object definition.
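A minimal illustration (the class name is arbitrary):

    struct C {};

    C x();        // function declaration: x takes nothing and returns C
    C y = C();    // object definition: y is a default-constructed C
    // C z{};     // since C++11, braces give an unambiguous object definition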
The grammar is not LR(1) (LR(1) grammars being one of the best known subclasses of CFGs for which a linear-time parsing algorithm exists). Other algorithms (potentially more costly in time or space) are used, either based on a more general theory or obtained by tweaking linear ones to adapt them to the C++ rules. Simplifying the grammar and rejecting the incorrectly accepted programs during semantic analysis is also a possibility.
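A sketch of why one token of lookahead is not enough (the type and the operator are only there so that both statements compile):

    struct T {
        T() = default;
        T(int) {}
        T operator++(int) { return *this; }   // postfix ++, so T(a)++ compiles
    };

    int a = 0;

    void f() {
        // Both statements begin with the tokens "T ( identifier )"; only the
        // token after the ')' separates a declaration from an expression, and
        // the parenthesized part may be arbitrarily long.
        T(a)++;   // expression statement: cast 'a' to T, postfix-increment it
        T(b);     // declaration: a local variable 'b' of type T
    }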
The correspondence between strings of characters and terminals is modified (mainly by type and template declarations; the need to take that into account inside template definitions has been solved with the use of `typename` and `template` for dependent names). This is handled by having the lexing phase query the symbol table, so that a given string yields one terminal or another depending on the context.
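For instance (Holder and its member are made up for the example; what matters is the two keywords):

    #include <vector>

    struct Holder {
        template <int N> int get() const { return N; }
    };

    template <typename T, typename U>
    void use(const T& t, const U& u) {
        typename T::value_type v{};   // 'typename': the dependent name is a type
        int n = u.template get<0>();  // 'template': the dependent name is a member template
        (void)t; (void)v; (void)n;
    }

    int main() {
        use(std::vector<int>{}, Holder{});
    }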
There are additional constraints (the need to declare some identifiers, type checking, ...) described in a more or less formal variant of English. This is usually considered semantics, even though some more powerful grammar descriptions could handle them.
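For instance, every line below is accepted by the grammar, yet the program is ill-formed because of those constraints (it is meant to be rejected by the compiler):

    int main() {
        n = 42;         // error: 'n' was never declared
        int* p = 3.5;   // error: a double does not convert to int*
    }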