I think the Stanford Parser is one of the best and most comprehensive NLP tools available for free: not only will it give you the phrase-structure parse (so you can count nouns and adjectives), but it will also give you the grammatical dependencies in the sentence (so you can extract the subject, object, etc.). The latter is something the Python libraries simply can't do yet (see http://stackoverflow.com/questions/3125926/does-nltk-have-a-tool-for-dependency-parsing) and is probably going to be the most important feature for your software's ability to work with semantics.
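To give you a feel for both halves, here's a rough Java sketch modeled on the ParserDemo class that ships with the parser. Exact package and method names have shifted a bit between parser releases, so treat this as a sketch rather than copy-paste code:

```java
import java.io.StringReader;
import java.util.List;

import edu.stanford.nlp.ling.CoreLabel;
import edu.stanford.nlp.ling.TaggedWord;
import edu.stanford.nlp.parser.lexparser.LexicalizedParser;
import edu.stanford.nlp.process.CoreLabelTokenFactory;
import edu.stanford.nlp.process.PTBTokenizer;
import edu.stanford.nlp.trees.GrammaticalStructure;
import edu.stanford.nlp.trees.GrammaticalStructureFactory;
import edu.stanford.nlp.trees.PennTreebankLanguagePack;
import edu.stanford.nlp.trees.Tree;
import edu.stanford.nlp.trees.TypedDependency;

public class ParseExample {
  public static void main(String[] args) {
    // Load the English PCFG grammar that ships with the parser (adjust the path to your copy).
    LexicalizedParser parser =
        LexicalizedParser.loadModel("edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz");

    // Tokenize and parse one sentence.
    List<CoreLabel> tokens = PTBTokenizer.factory(new CoreLabelTokenFactory(), "")
        .getTokenizer(new StringReader("The quick brown fox jumped over the lazy dog."))
        .tokenize();
    Tree parse = parser.apply(tokens);

    // Phrase-structure side: count nouns and adjectives from the POS tags on the leaves.
    int nouns = 0, adjectives = 0;
    for (TaggedWord tw : parse.taggedYield()) {
      if (tw.tag().startsWith("NN")) nouns++;
      if (tw.tag().startsWith("JJ")) adjectives++;
    }
    System.out.println("nouns=" + nouns + " adjectives=" + adjectives);

    // Dependency side: pull out the grammatical relations and keep the subjects.
    GrammaticalStructureFactory gsf = new PennTreebankLanguagePack().grammaticalStructureFactory();
    GrammaticalStructure gs = gsf.newGrammaticalStructure(parse);
    for (TypedDependency td : gs.typedDependenciesCCprocessed()) {
      if ("nsubj".equals(td.reln().getShortName())) {
        System.out.println("subject relation: " + td);  // e.g. nsubj(jumped-5, fox-4)
      }
    }
  }
}
```

The typed-dependency list is where the subject/object relations (nsubj, dobj, and friends) come out, which is the part your semantics work will lean on.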
Since you're interested in both Java and Python tools, Jython will probably be the most fun for you to use. I was in the exact same boat, so I wrote a post about using Jython to run the example code provided with the Stanford Parser - give it a glance and see what you think: http://blog.gnucom.cc/2010/using-the-stanford-parser-with-jython/
Edit: After reading one of your comments I learned you need to parse 29 million sentences. I think you could benefit greatly from using pure Java to combine two really powerful technologies: the Stanford Parser and Hadoop. Both are written in pure Java and have extremely rich APIs that you can use to parse vast amounts of data in a fraction of the time on a cluster of machines. If you don't have the machines, you can use an Amazon EC2 cluster. If you need an example of using the Stanford Parser with Hadoop, leave a comment for me and I'll update the post with a URL to my example.
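In the meantime, here's a bare-bones sketch of what the mapper side could look like with the newer org.apache.hadoop.mapreduce API. The ParseMapper name and the one-sentence-per-line input format are just assumptions on my part, and in a real job you'd ship englishPCFG.ser.gz to the nodes via the distributed cache or your job jar:

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.List;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

import edu.stanford.nlp.ling.CoreLabel;
import edu.stanford.nlp.parser.lexparser.LexicalizedParser;
import edu.stanford.nlp.process.CoreLabelTokenFactory;
import edu.stanford.nlp.process.PTBTokenizer;
import edu.stanford.nlp.trees.Tree;

/** Hypothetical mapper: one input line = one sentence, output = (sentence, bracketed parse). */
public class ParseMapper extends Mapper<LongWritable, Text, Text, Text> {

  private LexicalizedParser parser;

  @Override
  protected void setup(Context context) {
    // Load the grammar once per mapper JVM rather than once per sentence; loading is the slow part.
    parser = LexicalizedParser.loadModel("edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz");
  }

  @Override
  protected void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    String sentence = value.toString().trim();
    if (sentence.isEmpty()) {
      return;
    }
    // Tokenize and parse the sentence, then emit the Penn-style bracketed tree.
    List<CoreLabel> tokens = PTBTokenizer.factory(new CoreLabelTokenFactory(), "")
        .getTokenizer(new StringReader(sentence))
        .tokenize();
    Tree parse = parser.apply(tokens);
    context.write(new Text(sentence), new Text(parse.toString()));
  }
}
```

From there an identity reducer (or no reducer at all) is enough to collect the parses, and the 29 million sentences just get split across however many nodes you throw at the job.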