For Big O questions that explicitly say to use the definition to prove or disprove a claim, my question is: is what I am doing correct?
For example, suppose the question claims that g(n) = O(f(n)). To prove it I was doing the following:
g(n) <= C * f(n)
g(n)/f(n) <= C, then plug in n = 1 and solve for C, which proves it.
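For reference, the definition I am working from (with a constant C > 0 and a threshold n_0, symbols I'm introducing here just to state it) is:

$$g(n) = O(f(n)) \iff \exists\, C > 0,\ \exists\, n_0 \ge 1 \text{ such that } g(n) \le C \cdot f(n) \text{ for all } n \ge n_0.$$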
The contradiction I run into is when I approach a question that asks me to disprove such a claim. For example, to disprove
g(n) = O(f(n)) I would do
g(n) >= C * f(n) and solve for C again. However, this leads me to believe that a Big O claim can be proved and disproved at the same time, which is 100% wrong.
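As far as I can tell, disproving it with the definition should mean showing the negation, which I think is:

$$\forall\, C > 0,\ \forall\, n_0 \ge 1,\ \exists\, n \ge n_0 \text{ such that } g(n) > C \cdot f(n),$$

but I'm not sure how that connects to the inequality-flipping approach I used above.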
Using real numbers (proving):
n^2 + 3 = O(n^2)
(n^2 + 3)/n^2 <= C; assume n = 1, then C >= 4
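Written out more fully, I believe the proving direction amounts to this (using n_0 = 1, a choice I'm making just for illustration):

$$\frac{n^2 + 3}{n^2} = 1 + \frac{3}{n^2} \le 1 + 3 = 4 \quad \text{for all } n \ge 1,$$

so C = 4 and n_0 = 1 should witness n^2 + 3 = O(n^2).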
Disproving:
n^2 + 3 = O(n^2)
(n^2 + 3)/n^2 >= C; assume n = 1, then C <= 4
Both of these say that at n = 1 and C = 4 the function n^2 + 3 is O(n^2) and is NOT O(n^2).
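To make my confusion concrete, at n = 1 both inequalities reduce to statements about the same number:

$$\frac{1^2 + 3}{1^2} = 4 \le C \quad\text{and}\quad \frac{1^2 + 3}{1^2} = 4 \ge C,$$

and C = 4 satisfies both, which is what makes me think I am proving and disproving the same statement.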
Can anyone help clarify my confusion and help me learn a good, systematic way of solving Big O questions?