Usually these entities (grammar rules, parsers, ASTs) have conceptual relations as captured by books such as the Dragon compiler book (already mentioned in a comment).
I don't think these relations are very interesting when it comes to designing a new language; all you really care about is the syntax of the language (often expressed as a "context-free" grammar with additional constraints) and the semantics of the language (usually expressed as a very big reference document, sometimes expressed in a formal notation such as denotational semantics that can interpret an abstract parse tree produced by magic).
"Real" relations occur when you have machinery that ties them together: If I give grammar A to parser generator B, and use the result to process source code S, I may get AST T.
At this level, you aren't designing your language so much as implementing it. What you want here is an integrated set of tools for processing your language definition; ideally, it will accept your grammar and semantic notation directly. I don't know of any practical tools that do both of these ideally, so you have to choose among those that do exist.
Two tools that can be used for this to varying degrees of effectiveness are ANTLR and my DMS Software Reengineering Toolkit.
DMS offers at least one way in which semantics can be defined: it provides a means for writing down "algebraic" laws of equivalence between language forms. In essence, you can say that this language form is equivalent to that language form by writing pattern1 = pattern2, just as you do with algebra. You can see how this is done using algebra itself as an example.
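The flavor of such algebraic laws can be sketched in a few lines of Python (DMS has its own rule notation; this only illustrates the idea): a rule is a pair of patterns, pattern variables match any subtree, and applying the rule means matching the left-hand side and instantiating the right-hand side.

```python
# A sketch of "pattern1 = pattern2" rewriting over trees.
# Illustration of the idea only; DMS's actual rule notation differs.
# Trees are tuples; strings starting with "?" are pattern variables.

def match(pattern, tree, bindings):
    """Match pattern against tree, extending bindings; None on failure."""
    if isinstance(pattern, str) and pattern.startswith("?"):
        if pattern in bindings:
            return bindings if bindings[pattern] == tree else None
        return {**bindings, pattern: tree}
    if isinstance(pattern, tuple) and isinstance(tree, tuple) \
            and len(pattern) == len(tree):
        for p, t in zip(pattern, tree):
            bindings = match(p, t, bindings)
            if bindings is None:
                return None
        return bindings
    return bindings if pattern == tree else None

def instantiate(pattern, bindings):
    """Build a tree from the right-hand pattern and the captured bindings."""
    if isinstance(pattern, str) and pattern.startswith("?"):
        return bindings[pattern]
    if isinstance(pattern, tuple):
        return tuple(instantiate(p, bindings) for p in pattern)
    return pattern

def rewrite(rule, tree):
    """Apply rule (lhs, rhs) at the root if it matches, else return tree."""
    lhs, rhs = rule
    bindings = match(lhs, tree, {})
    return instantiate(rhs, bindings) if bindings is not None else tree

# The algebraic law  x + 0 = x  as a rewrite rule:
add_zero = (("add", "?x", ("num", 0)), "?x")

assert rewrite(add_zero, ("add", ("num", 7), ("num", 0))) == ("num", 7)
```

A real rewriting engine would also apply rules at every subtree and repeat until no rule fires; the sketch applies one rule at the root just to show the matching-and-substitution mechanics.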