To prove f(x) is O(x^n), observe that for x >= 1 we have 0 <= x^0 <= x^1 <= ... <= x^n, so each power is at most x^n.
Hence, for x >= 1, f(x) <= (n+1) * max(A_0, ..., A_n) * x^n.
But (n+1) * max(A_0, ..., A_n) is a constant with respect to x, so we have our bound.[*]
To prove x^n is O(f(x)) is actually quite difficult, since it isn't even true unless A_n != 0. But if A_n != 0, we are required to prove:
x^n is O(A_n x^n + ... + A_0 x^0)
By some theorems about limits that I can't be bothered to state, that's true iff
(1/A_n) x^n is O(x^n + (A_{n-1}/A_n) x^{n-1} + ... + (A_0/A_n) x^0)
which is true iff
(1/A_n) x^n - (A_{n-1}/A_n) x^{n-1} - ... - (A_0/A_n) x^0 is O(x^n) [**]
But now the LHS is a polynomial of exactly the form we proved to be O(x^n) in the first part. QED.
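To make the reverse direction concrete, here's a numeric check on a made-up example with A_n != 0, f(x) = 3x^2 - 5x + 1, showing that x^2 <= M * f(x) eventually holds with witness constant M = 1 (for this f, from x >= 3 onwards):

```python
# Numeric check of the reverse direction, x^n is O(f(x)), on the
# example f(x) = 3x^2 - 5x + 1 (A_n = 3 != 0): eventually x^2 <= M * f(x).
def f(x):
    return 3 * x**2 - 5 * x + 1

M = 1  # witness constant; holds once x is large enough (here x >= 3)
for x in range(3, 1000):
    assert x**2 <= M * f(x)
```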
In practice, though, what you actually do is prove some lemmas about the big-O complexity of the sum of two functions with known big-O complexities. Then you just observe that every term on both sides is O(x^n), so the whole sums are too, and you're done.
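The key lemma here is: if |f(x)| <= M1 * |g(x)| and |h(x)| <= M2 * |g(x)| for x >= k, then |f(x) + h(x)| <= (M1 + M2) * |g(x)| for x >= k, by the triangle inequality. A sketch in code, checking the combined witness constant on sample points (the functions and witnesses below are made-up examples, not anything from the argument above):

```python
# Sketch of the sum lemma: if f is O(g) with witness M1 and h is O(g)
# with witness M2 (both for x >= k), then f + h is O(g) with witness
# M1 + M2. Example functions and constants are arbitrary.
f = lambda x: 5 * x**2
h = lambda x: -3 * x**2 + 100 * x
g = lambda x: x**2

M1, M2, k = 5, 103, 1   # |h(x)| <= 3x^2 + 100x <= 103 x^2 for x >= 1
for x in range(k, 1000):
    assert abs(f(x)) <= M1 * abs(g(x))
    assert abs(h(x)) <= M2 * abs(g(x))
    assert abs(f(x) + h(x)) <= (M1 + M2) * abs(g(x))
```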
[*] That's a fudge, actually, since what matters is the comparison of the absolute values of the functions. But for large enough x, f(x) has the same sign as A_n, so if A_n is negative we just do a similar inequality the other way around.
I don't think you really need any use of the triangle inequality to "escape" the abs, because polynomial functions are necessarily monotonic outside a certain range (they have only finitely many turning points, since the derivative has at most n-1 roots), and when considering big-O limits we only care about what happens outside a certain range.
[**] Another fudge: really I should have written the limit constant M on the RHS, and included it when taking terms across to the LHS. OK, so this is only a sketch of a proof.