The Personal Software Process (PSP) is designed to help software engineers understand and improve their performance. The PSP uses scripts to guide a practitioner through the process; each script defines its purpose, entry criteria, steps, and exit criteria. PSP0 is the baseline framework for starting a personal process.

One of the scripts used in PSP0 is the Development Script, which guides development. Its entry criteria are a requirements statement, a project plan summary, time and defect recording logs, and an established defect type standard. The script's activities are design, code, compile, and test, and you exit it when you have a thoroughly tested application and complete time and defect logs.

In the Design phase, you review the requirements and produce a design, recording any requirements defects in the log and tracking your time. In the Code phase, you implement the design, again recording defects and time. In the Compile phase, you compile, fix any compile-time errors, repeat until the program compiles, and record any defects and time. Finally, in the Test phase, you test until all tests run without error and all defects are fixed, while recording time and defects.
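
To make the tracking concrete, here is a minimal sketch of the two PSP0 logs as Python data structures. The field names follow the PSP0 forms, but the shape itself is my own illustration, not anything prescribed by the process:

    from dataclasses import dataclass

    @dataclass
    class TimeLogEntry:
        # One row of the PSP0 Time Recording Log.
        date: str              # e.g. "2009-08-14"
        phase: str             # "design", "code", "compile", or "test"
        start: str             # clock time work began
        stop: str              # clock time work ended
        interruption_min: int  # minutes lost to interruptions
        delta_min: int         # net time: stop - start - interruptions

    @dataclass
    class DefectLogEntry:
        # One row of the PSP0 Defect Recording Log.
        date: str
        number: int            # sequential defect id
        defect_type: str       # from the defect type standard
        inject_phase: str      # phase in which the defect was introduced
        remove_phase: str      # phase in which it was found and fixed
        fix_time_min: int      # minutes spent fixing it
        description: str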

My concerns are with how to manage the code, compile, and test phases when using modern programming languages (especially interpreted languages like Python, Perl, and Ruby) and IDEs.

My questions:

  • In interpreted languages, there is no compile time. However, there might be problems in execution. Is executing the script, outside of the unit (and other) tests, considered "compile" or "test" time? Should errors with execution be considered "compile" or "test" errors when tracking defects?
  • If a test case encounters a syntax error, is that considered a code defect, a compile defect, or a test defect? The test actually found the error, but it is a code problem.
  • If an IDE identifies an error that would prevent compilation before actually compiling, should that be identified? If so, should it be identified and tracked as a compile error or a code error?

It seems like the PSP, at least the PSP0 baseline process, is designed to be used with a compiled language and small applications written using a text editor (and not an IDE). In addition to my questions, I would appreciate the advice and commentary of anyone who is using or has used the PSP.

A: 

It sounds, basically, like your formal process doesn't match your process in practice. Step back and re-evaluate what you're doing and whether you should choose a different formal approach (if in fact you need a formal approach at all).

Mike Burton
The PSP is a guideline that can be modified. However, I'm not sure HOW to modify it in this particular case. Also, I believe that you need a formal approach if you want to consider what you are doing software engineering. If you don't have a formal approach, you might as well just be a code monkey.
Thomas Owens
I agree. However, I also think there's very little call for software engineering, and almost none of it would use interpreted languages. Software engineering is for systems with engineering constraints binding them; it's not a useful mental focus for a normal application developer. The latest Spolsky post takes this to an extreme, but in general building applications and engineering software are not the same kind of activity.
Mike Burton
+1  A: 

In general, as the PSP is a personal improvement process, the answers to your actual questions do not matter as long as you pick one answer and apply it consistently. That way you will be able to measure the times you take in each defined phase, which is what PSP is after. If your team is collectively using the PSP then you should all agree on which scripts to use and how to answer your questions.

My takes on the actual questions are (not that they are relevant):

  • In interpreted languages, there is no compile time. However, there might be problems in execution. Is executing the script, outside of the unit (and other) tests, considered "compile" or "test" time? Should errors with execution be considered "compile" or "test" errors when tracking defects?

To me, test time is only the time when the actual tests run, and nothing else. In this case, I'd record both the errors and the execution time as 'compile' time: time spent generating and running the code.
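
For instance (a rough sketch; the script name and the use of Python are just for illustration), you could time the plain run of the script and book it against the compile phase:

    import subprocess, sys, time

    start = time.perf_counter()
    # Under this convention, running the script itself (outside the test
    # suite) counts as "compile": time spent generating and running the code.
    result = subprocess.run([sys.executable, "myscript.py"])  # hypothetical script
    compile_min = (time.perf_counter() - start) / 60

    # A failed run would likewise be logged as a compile-phase defect.
    had_compile_defect = result.returncode != 0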

  • If a test case encounters a syntax error, is that considered a code defect, a compile defect, or a test defect? The test actually found the error, but it is a code problem.

Syntax errors are code defects.

  • If an IDE identifies an error that would prevent compilation before actually compiling, should that be identified? If so, should it be identified and tracked as a compile error or a code error?

If the IDE is part of your toolchain, then its catching an error is just like you having spotted the error yourself, and thus a code error. If you don't use the IDE regularly, then I'd count them as compile errors.

Vinko Vrsalovic
I'm actually looking for valid takes on the questions. The PSP is supposed to be an adaptable process, and I want to see how others have resolved these questions, as I'm sure I'm not the only one to have them.
Thomas Owens
And I'm trying to tell you that you are attaching way too much importance to this relatively minor issue. I'm sure you can see what the alternatives are. Pick any one and apply it consistently. At the end of the day, it actually doesn't matter, especially because you'll surely get as many answers as there are people answering.
Vinko Vrsalovic
A: 
  • In interpreted languages, there is no compile time. However, there might be problems in execution. Is executing the script, outside of the unit (and other) tests, considered "compile" or "test" time? Should errors with execution be considered "compile" or "test" errors when tracking defects?

The errors should be categorized according to when they were created, not when you found them.

  • If a test case encounters a syntax error, is that considered a code defect, a compile defect, or a test defect? The test actually found the error, but it is a code problem.

Same as above: always go back to the earliest point in time. If the syntax error was introduced while coding, then it corresponds to the coding phase; if it was introduced while fixing a defect, then it corresponds to the phase in which the fix was made.
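
As a concrete illustration of that rule (the record format here is invented, not part of PSP), a syntax error that a test case trips over would still be logged as a coding defect:

    # Illustrative only: a defect's type follows the phase that injected it,
    # not the phase that found it.
    defect = {
        "type": "syntax",
        "inject_phase": "code",   # the typo was written during coding...
        "remove_phase": "test",   # ...even though a test run exposed it
        "fix_time_min": 2,
    }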

  • If an IDE identifies an error that would prevent compilation before actually compiling, should that be identified? If so, should it be identified and tracked as a compile error or a code error?

I believe it should not be tracked separately; it's just part of the time spent writing the code.

As a side note, I've used the Process Dashboard tool to track PSP data and found it quite nice. It's free and Java-based, so it should run anywhere. You can get it here: http://processdash.sourceforge.net/

JRL
I've looked at the Process Dashboard, and need to look more closely at it.
Thomas Owens
A: 

After reading the replies by Mike Burton, Vinko Vrsalovic, and JRL and re-reading the relevant chapters in PSP: A Self-Improvement Process for Software Engineers, I've come up with my own takes on these problems. Helpfully, I also found a section in the book that I had originally missed because two pages were stuck together.

  • In interpreted languages, there is no compile time. However, there might be problems in execution. Is executing the script, outside of the unit (and other) tests, considered "compile" or "test" time? Should errors with execution be considered "compile" or "test" errors when tracking defects?

The book says that "if you are using a development environment that does not compile, then you should merely skip the compile step." However, it also says that if you have a build step, "you can record the build time and any build errors under the compile phase".

This means that for interpreted languages, you either remove the compile phase from your tracking or replace compilation with your build scripts. Because PSP0 is generally used with small applications (similar to what you would expect in a university lab), I would expect that you would not have a build process and would simply omit the step.
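
If you do want a measurable compile phase in a language like Python, one option (my own suggestion, not something the book prescribes) is to treat a byte-compile/syntax-check pass as the build step and time it:

    import py_compile, time

    start = time.perf_counter()
    try:
        # Byte-compiling stands in for the "build" step the book mentions;
        # doraise=True turns syntax errors into catchable exceptions.
        py_compile.compile("myscript.py", doraise=True)  # hypothetical file
        compile_defects = 0
    except py_compile.PyCompileError as err:
        compile_defects = 1  # log this under the compile phase
        print(err)
    compile_min = (time.perf_counter() - start) / 60  # compile-phase time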

  • If a test case encounters a syntax error, is that considered a code defect, a compile defect, or a test defect? The test actually found the error, but it is a code problem.

I would record defects against the artifact where they are located.

For example, if a test case itself has a defect, that would be a test defect. If the test ran and an error was found in the application being tested, that would be a code or design defect, depending on where the problem actually originated.

  • If an IDE identifies an error that would prevent compilation before actually compiling, should that be identified? If so, should it be identified and tracked as a compile error or a code error?

If the IDE identifies a syntax error, that is the same as you spotting the error yourself before execution. If you use an IDE properly, there are few excuses for letting through defects that would break execution (that is, anything other than logic/implementation errors).

Thomas Owens
+1  A: 

I've used PSP for years. As others have said, it is a personal process, and you will need to evolve PSP0 to improve your development process. Nonetheless, our team (all PSP-trained) grappled with these issues on several fronts. Let me give you an idea of the components involved, and then I'll say how we managed.

We had a PowerBuilder "tier"; the PowerBuilder IDE prevents you from even saving your code until it compiles and links correctly. Part of the system used JSP, though the quantity of Java was minor and boilerplate, so in practice we didn't count it at all. A large portion of the system was JavaScript; this was before the wonderful Ajax libraries came along, and it represented a large share of the work. The other large portion was Oracle PL/SQL, which has a somewhat more traditional compile phase.

When working in PowerBuilder, the compile (and link) phase started when the developer saved the object. If the save succeeded, we recorded a compile time of 0; otherwise, we recorded the time it took to fix the error(s) that caused the compile-time defect. Most often, these were defects injected in the code phase and removed in the compile phase.

That compile-on-save behavior of the PowerBuilder IDE forced us to move the code review phase to after compiling. Initially, this caused us some distress, because we weren't sure how, or if, such a change would affect the meaning of the data. In practice, it became a non-issue. In fact, many of us moved our Oracle PL/SQL code reviews to after the compile phase too, because we found that when reviewing code before compiling, we would often gloss over syntax errors that the compiler would report anyway.

There is nothing wrong with a compile time of 0, any more than there is anything wrong with a test time of 0 (meaning your unit tests passed without detecting errors and ran in less time than your smallest unit of measure). If those times are zero, then you didn't remove any defects in those phases, and you won't encounter a divide-by-zero problem. You could also record a nominal minimum of 1 minute if that makes you more comfortable, or if your measures require a non-zero value.

Your second question is independent of the development environment. When you encounter a defect, you record the phase in which you injected it (typically design or code) and the phase in which you removed it (typically design/code review, compile, or test). That gives you a measure called "leverage", which indicates the relative effectiveness of removing a defect in a particular phase (and supports the "common knowledge" that removing defects earlier is more effective than removing them later in the process). The phase a defect was injected in determines its type, i.e., a design or coding defect; the phase it was removed in doesn't affect its type.
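
For illustration (the numbers here are invented, purely to show the arithmetic), leverage can be computed as defects removed per hour in a phase, relative to the test phase:

    # Invented numbers, purely to illustrate "leverage":
    removed = {"code review": 6, "compile": 4, "test": 5}        # defects removed
    hours   = {"code review": 1.0, "compile": 0.5, "test": 5.0}  # time spent

    def removal_rate(phase):
        # Guard the zero-time case discussed above: no time spent in a
        # phase means there is no removal rate to compute for it.
        return removed[phase] / hours[phase] if hours[phase] else 0.0

    base = removal_rate("test")
    leverage = {p: removal_rate(p) / base for p in removed}
    # code review: (6 / 1.0) / (5 / 5.0) = 6.0 -> six times as effective
    # as test at removing defects, per hour spent.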

Similarly, with JavaScript, the compile time is effectively immeasurable. We didn't record any time for the compile phase, but then again, we didn't remove any defects in that phase either. The bulk of the JavaScript defects were injected in design/code and removed in design review, code review, or test.

Daniel 'Dang' Griffith
