views: 772
answers: 12

What are all the types of software testing that you can put in front of the word "Testing"?

Here are some examples:

  • Unit Testing
  • Functional Testing
  • Integration Testing
  • Performance Testing
  • Stress and Volume Testing
  • White Box Testing
  • Black Box Testing
  • User Testing
  • Automated Testing
  • Regression Testing

Let's see if we can come up with good, concise, distinguishing definitions for these.

How about 40 words or fewer for each of these (and others)?

+2  A: 

Black/White box testing is a term to indicate whether you are testing with knowledge of how the underlying system works.

Black box testing means you treat the code as if it is a totally unknown system with just some exposed interfaces.

White box testing is the opposite: you test with knowledge of the implementation in mind, to more thoroughly exercise the possible code paths. For example, you know a particular class persists itself by saving a file, but only under a certain condition not apparent from the interface, so you might test that the file was actually written to disk when that condition occurs.
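As an illustration of that file-persistence example, here is a minimal white-box test sketch. The `Counter` class and its threshold rule are made up for the example; the point is that the test drives the object to an internal condition the public interface never advertises:

```python
import os
import tempfile
import unittest

# Hypothetical class: persists its state to a file, but only once the
# value reaches a threshold -- an implementation detail that is not
# visible through the public interface.
class Counter:
    THRESHOLD = 10

    def __init__(self, path):
        self.path = path
        self.value = 0

    def increment(self):
        self.value += 1
        if self.value >= self.THRESHOLD:  # the hidden persistence rule
            with open(self.path, "w") as f:
                f.write(str(self.value))

class WhiteBoxTest(unittest.TestCase):
    def test_file_written_at_threshold(self):
        # White box: knowing the file appears only at THRESHOLD, we
        # drive the object exactly to that point and then check disk.
        path = os.path.join(tempfile.mkdtemp(), "state.txt")
        c = Counter(path)
        for _ in range(Counter.THRESHOLD):
            c.increment()
        self.assertTrue(os.path.exists(path))
```

A pure black-box test could never target that moment deliberately, because nothing in the interface says when (or whether) the file gets written. Run with `python -m unittest`.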

Mike Stone
+1  A: 

I would reclassify it as:

  • Automated Testing - testing performed without human intervention
    • Integration
    • Functional
    • Unit
  • User Testing - users at keyboards banging on it
    • Black box
    • White box
  • Load Testing - a DDoS-style barrage against the site, with logging and performance metrics
    • Performance
    • Stress/Volume
DevelopingChris
A: 

Penetration Testing

Network Penetration Testing involves a team attempting to break into your network or servers. This is what jumps into the minds of most firewall people, security admins, operations teams, and IT security groups when you say “penetration test”.

Application Penetration Testing is the other type, and it’s what we tend to be talking about here. An app pentest isn’t usually looking at servers or the network. Instead, an app pentest team is given a piece of software to look at, with the assumption that “the servers we’re running will be fine as long as there is no horrible unknown vulnerability in this program”.

Derek Park
A: 
  • Unit testing:
    These types of tests are used to check for errors in the smaller parts of a program. Most of the time the parts that are checked are individual methods/functions/... The purpose of these tests is to check that the program behaves correctly in any possible situation, ranging from a good set of inputdata to wrong or even malicious input. Of course, the tests are as good as the developer that creates them, and sometimes these unittests will give someone a false sense of safety when the tests are not as thorough as they should be. Unittesting will also rely on assertions. those are facts that must be true and can indicate a problem when they are not validated. Unittests are best separated from the main, real, code of the program, however assertions can be very useful when used inside the main code. Not only does this help the robustness of the code, but also, if done correctly, improve the readability of the code itself.
Sven
+2  A: 

Unit testing is often done as the code is written (or even before), or immediately after. It tests classes or functions individually, as a first line of defence to catch mistakes.

Integration testing is done when multiple "units" have been completed. The units are combined, and their combination is tested to verify that the system works as a whole.
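A tiny illustration of the distinction (both "units" here are made up): a unit test would cover `parse_price` and `apply_discount` separately, while the integration test verifies that the output of one feeds correctly into the other.

```python
# Two hypothetical "units", each assumed to be unit-tested on its own.
def parse_price(text):
    # Parses a price string like " $20.00 " into a float.
    return float(text.strip().lstrip("$"))

def apply_discount(price, percent):
    # Applies a percentage discount and rounds to cents.
    return round(price * (1 - percent / 100), 2)

# Integration test: verify the units work when combined, i.e. the
# parser's output feeds straight into the discount calculation.
def test_checkout_pipeline():
    assert apply_discount(parse_price(" $20.00 "), 25) == 15.0

test_checkout_pipeline()
```

Bugs that only surface at the seam (mismatched types, units, or assumptions between the two functions) are exactly what this level of testing exists to catch.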

Mike Stone
A: 

Regression testing is a mechanism to test existing functionality. There is usually a set of tests that are executed with each significant build (or even every build with a continuous build system), and the purpose is to catch bugs that may have been introduced in the existing functionality in the course of fixing other bugs or adding new functionality.

As a side note, the unit test suite can often act as the regression test suite (if they were done as automated tests). In this manner, all unit tests can be executed along with each build to help ensure that no new bugs have been introduced in the existing functionality.

Actually, any automated tests that have been built should be included as part of the regression suite, even if they are executed less often (if they take a long time to run).

Mike Stone
A: 
  • Performance Testing:
    It is critical that the application holds up when facing a heavy load in use, and performance testing measures this by simulating such load. The load can be of any kind, ranging from a massive number of users, to heavy network traffic, to massive numbers of concurrent database connections/operations. Performance testing is done by simulating a heavy-use environment using dedicated tools or facilities built into the IDE. A nice article on performance testing for websites can be found here
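A stripped-down sketch of the idea: many concurrent "users" hammer an operation while latencies are recorded. `handle_request` is a stand-in for the real system call (an HTTP request, a database query, etc.), and the numbers are purely illustrative.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(i):
    time.sleep(0.001)  # simulated work in place of a real request
    return i

def load_test(n_users=50, requests_per_user=10):
    latencies = []

    def user(u):
        # Each simulated user issues a burst of requests, timing each one.
        for r in range(requests_per_user):
            start = time.perf_counter()
            handle_request(r)
            latencies.append(time.perf_counter() - start)

    with ThreadPoolExecutor(max_workers=n_users) as pool:
        list(pool.map(user, range(n_users)))
    return len(latencies), max(latencies)

count, worst = load_test()
print(f"{count} requests, worst latency {worst * 1000:.1f} ms")
```

Real tools (JMeter, ab, and the like) do the same thing at scale, with proper reporting; the point here is only the shape of the measurement.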
Sven
A: 

Don't forget Interoperability testing, if you need to talk to different systems.

kokos
+2  A: 

Testing is "questioning a product in order to evaluate it" (Bach), or "gathering information about a product with the intention of informing a decision about it" (Weinberg).

Since there is an infinite number of questions that one could ask about a product, there's an infinite number of classifications into which those questions could go. And since there's an intractably large number of people that could apply labels to that infinite number of questions, the resulting list would be (infinite * intractably large). And not very helpful in any way that I could see.

So, alas, in my view it's kind of a silly question.

---Michael B.

Michael Bolton
A: 

I would add one:

Acceptance testing: a small subset of tests that are required to pass before QA accepts the build and starts testing it fully. They are usually automated, and run against each new build provided by the developers.
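One way to carve out such a subset (all names here are illustrative): mark the acceptance tests and build a suite from only the marked classes, leaving the full regression run for later.

```python
import unittest

# Decorator marking a test class as part of the acceptance ("smoke") subset.
def smoke_test(cls):
    cls.is_smoke = True
    return cls

@smoke_test
class LoginSmokeTest(unittest.TestCase):
    def test_app_starts(self):
        self.assertTrue(True)  # stand-in for "app boots and login works"

class FullRegressionTest(unittest.TestCase):
    def test_exhaustive_paths(self):
        self.assertTrue(True)  # stand-in for the slow, thorough tests

def acceptance_suite(test_classes):
    # Collect only the classes marked as smoke tests.
    suite = unittest.TestSuite()
    for cls in test_classes:
        if getattr(cls, "is_smoke", False):
            suite.addTests(
                unittest.defaultTestLoader.loadTestsFromTestCase(cls))
    return suite

result = unittest.TextTestRunner().run(
    acceptance_suite([LoginSmokeTest, FullRegressionTest]))
```

Only the marked class runs, so QA gets a fast pass/fail signal on each new build before committing to the full test cycle.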

Julien