Can I automate all types of tests (unit tests, etc.) so that I don't need a QA team to do manual testing? And if not, why?
Black box testing will never go away as long as human beings are using your product. There is no substitute for the human ability to identify "something weird going on".
Can I automate all types of tests (unit tests, etc.) so that I don't need a QA team to do manual testing?
No
And if not, why?
Not all technologies/tests are suitable for automation.
Aside from that, if the automated test code is written by the same individual who wrote the code under test, then it is possible that the test misses some important aspect (one the original developer didn't think of either) that should be validated.
No.
Your QA can be a smaller team, and their purpose becomes finding unexpected things. They get to play with the system in ways that a user might. In ways that programmers didn't expect.
Once they find something, you write an automated test so they don't need to repeat it. But you still need them to find those cases.
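To make that workflow concrete, here's a minimal sketch. Everything in it is hypothetical: suppose a tester reports that a `format_price` helper renders `1.5` as `"$1.5"` instead of `"$1.50"`. After fixing it, you pin the exact case down as an automated regression test so nobody has to re-check it by hand:

```python
# Hypothetical helper a manual tester found a bug in:
# format_price(1.5) used to render as "$1.5" instead of "$1.50".
def format_price(amount: float) -> str:
    return "${:.2f}".format(amount)

# Regression tests (pytest-style) pinning down the exact cases
# the tester found, so they never need to be repeated manually.
def test_trailing_zero_is_kept():
    assert format_price(1.5) == "$1.50"

def test_whole_dollars_show_cents():
    assert format_price(3) == "$3.00"
```

The tester's discovery happens once, by hand; every run of the suite after that repeats it for free.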
Not to mention, they'll find things like typos, usability issues with the UI, hard-to-read colour combinations, and so on.
No!
Not everything can be tested by automated testing, so please don't suggest that! User interface aesthetics, to name just one area, is something your QA team should be helping you fix that automated testing cannot.
One of the reasons for the Vista experience was supposedly that MSFT had cut back on 'mere human' testers in favor of tester-programmers who wrote scripts.
Of course, the scripts didn't notice things like the inconsistent mix of themes as you went deeper into the Control Panel, the copy-estimate dialogs, or other 'minor' GUI features that made the whole product look like crap.
Automated testing is only as good as the tests. Theoretically, if you could write a test for every potential interaction the user will make with your system, manual testing would be obsolete. However, that's not practical at all.
Good automated testing will reduce the amount of manual testing required by a QA team, but it won't eliminate it. In addition, good automated tests help prevent having to retest problems manually: once a problem is discovered, a test can be written to cover that specific scenario automatically from then on.
I would sum it up thusly:
Automated tests are for finding the problems that you know about (incorrect code).
Manual tests are for finding the problems that you don't know about (incorrect design/specs).
If you use manual tests to find problems with the code, that is inefficient. If you use automated tests to find problems with the design, that is inexcusable.
Absolutely not. Automated tests are code. They can have bugs of their own that will mask bugs in the application under test (AUT). Also, no automated test is capable of asking, "What if I do this?", or capable of making an educated guess about where faults are most likely to occur. Automated exploratory testing does not exist.
On the mechanical side of things, it's also generally inefficient to write an automated test that will only be run once. If doing it manually takes less time than developing the test, it's a waste to automate it.
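To illustrate how a buggy test can mask a bug in the code under test, here's a minimal hypothetical sketch (the `add_to_cart` function and the test are both invented for the example). The code has a real bug, and the test "passes" anyway because of a mistake in the test itself:

```python
# Hypothetical code under test, with a real bug: it should reject
# negative quantities but silently accepts them.
def add_to_cart(cart: dict, item: str, quantity: int) -> dict:
    cart[item] = cart.get(item, 0) + quantity  # bug: no validation
    return cart

# A buggy test that masks the bug: the author meant to assert that
# a ValueError is raised, but forgot to fail when it isn't, so the
# test passes whether or not the validation exists.
def test_rejects_negative_quantity_broken():
    try:
        add_to_cart({}, "apple", -3)
        # Missing: an explicit failure here, e.g. raise AssertionError.
    except ValueError:
        pass
    # Control reaches this point either way, so the test "passes".
```

The green result from this test tells you nothing; only a human reading the test, or a tester poking at the product, would catch that negative quantities slip through.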
Manual QA, more commonly known as Blackbox QA, is far from dead.
It's true that unit tests and general automated tests can cover at least 90% of code-path testing. What many don't realize is that the manual last 10% can be some of the most important work a software and hardware organization does.
Let's take the user interface, for example. A unit test can tell you that a check box is placed in the right location, and turns on and off as expected. What the test can't tell you is that it's horribly bitmapped and looks terrible with the hideous purple and yellow color scheme in the application.
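A quick sketch of that limit, using a hypothetical stand-in for a checkbox widget (real UI toolkits differ, but the point is the same): every behavioural property is easy to assert, and the aesthetic one is impossible to.

```python
# Hypothetical minimal stand-in for a UI checkbox widget.
class CheckBox:
    def __init__(self, x: int, y: int):
        self.x, self.y = x, y
        self.checked = False

    def click(self):
        self.checked = not self.checked

def test_checkbox_behaviour():
    box = CheckBox(x=10, y=20)
    assert (box.x, box.y) == (10, 20)  # placed where the spec says
    box.click()
    assert box.checked                 # turns on as expected
    box.click()
    assert not box.checked             # and off again
    # No assertion can express "this looks terrible in purple and yellow".
```

Everything a machine can check about the checkbox passes; whether it's ugly still takes a pair of human eyes.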
The most important reason for Blackbox QA is that you end up with strong customer advocate inside your organization. Many of these QA people (myself included), have more of a creative background than a programming background. While some may think of this as a failing, these are the people who don't care about how the code works - they care about how the product works. They spend time thinking like a customer, rather than a developer; "Oh, my almost dead iPod is finished syncing, that means I can close my laptop and just let it charge. Yup, and then I'll just pull it out when my machine is asleep (even though I was playing music from it on my computer) and everything will be fine."
Developers and testers know how a product is supposed to work, and both tend to exercise a product to specification. It's a good tester's job to use the product in a careless way, to make sure that bad stuff doesn't happen. Yank a USB drive from a computer while you're copying data? Are you crazy?! Sure, that's a really stupid idea. But people do it all the time. And a good QA person will do just that, to make sure that pulling a hard drive doesn't take a whole system down. Or turning off WiFi while downloading a movie, or syncing music while purchasing new content and changing your account password and email address at the same time. Or installing an OS on an MP3 player and trying to boot from it, then pulling the player from the system while it's booted from the device (yup, I did that, and found a really good bug from it).
Joel on Software answers "why QA?" much more eloquently than I do: http://www.joelonsoftware.com/items/2010/01/26.html