What are, in your opinion, the worst subjects of widespread ignorance amongst programmers, i.e. things that everyone who aspires to be a professional should know and take seriously, but doesn't?
Ignorance of the fact that it's really important to let your coworkers know when you're ignorant of something!
Especially when working with new colleagues, one of the hardest things I find is trying to figure out what the person knows and doesn't know.
Ignorance of the fact that questions like this should be community wiki.
In Java and C# (leaving aside ref/out):
Myth: "Objects are passed by reference"
Reality: "Objects aren't passed at all; references are passed by value"
There's a significant difference, and it's often ignored :(
Even in C everything is passed by value (including pointers).
EDIT (jonskeet): Judging by the comments, it may be worth referring to my article on C# parameter passing. Hopefully this will reduce confusion rather than increasing it...
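To make the distinction concrete, here's a minimal Java sketch (the Person class and names are purely illustrative): the method gets a copy of the reference, so it can mutate the object the caller sees, but reassigning the parameter changes nothing for the caller.

class Person {
    String name;
    Person(String name) { this.name = name; }
}

public class PassByValueDemo {
    static void update(Person p) {
        p.name = "changed";       // mutates the object the caller's reference points to
        p = new Person("other");  // only rebinds the local copy of the reference
    }

    public static void main(String[] args) {
        Person person = new Person("original");
        update(person);
        System.out.println(person.name); // prints "changed", not "other"
    }
}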
Not commenting unintuitive code. Not commenting unintuitive interfaces.
Disregarding coding style in interface code. (I am kind of used to seeing it ignored in code, but it creeps me out when even the interfaces other people have to use don't blend in.)
Inconsistency in naming, and ignorance of the value of consistent naming.
Mine is when programmers think only about the code and not about the users. I put usability first, and try my best to make everything as easy and intuitive to use as possible.
Unfortunately, some programmers don't do that; e.g., they use non-descriptive labels for fields (or, don't use labels at all), don't plan and think about the interface layout, and the error messages explain things in a technical manner rather than telling the user what they need to do.
If more programmers read books on usability, marketing, and other such concepts (like I do), the software world would be a much better place.
Ignorance that other programmers may need to maintain your code.
I.e.:
- Lack of decent method and variable names
- Lack of comments
- Moronic structure
Oh, people who think all abstractions are bad. "You shouldn't use nHibernate, built-in ASP.NET functionality and so on, because you lose some control." Why don't they just code everything in assembly...
Edit: I should point out that I am not saying you must use these abstractions, just that there is nothing wrong with using them when it makes sense to (e.g. it's foolish to use nHibernate on a very simple site). It's a judgement call as to when an abstraction makes sense; I just think some people are ignorant of the benefits it can bring.
These aren't universal, but are very common:
Lack of knowledge (and interest) about what it takes to operate and support the software once delivered.
Failure to appreciate that software has no value in and of itself, but only adds value when it is used for something.
Both of which lead to a lack of interest in what happens to the software once it's compiled, tested and released.
No one knows what the heck an MVC is. A lot of people think they know, but they're usually wrong.
In Java: thinking that "synchronized" blocks are only about atomicity, when usually the problems are more about visibility
EDIT: Assuming I (Jon Skeet) understand you correctly, this is what I normally talk about as the difference between atomicity and volatility. And yes, it's misunderstood :(
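A small Java sketch of the visibility half of that distinction (names are illustrative): the write to the flag is atomic either way, but without volatile (or a synchronized block establishing a happens-before edge) the worker thread may never see it.

public class VisibilityDemo {
    // Remove 'volatile' and the worker thread is allowed to cache the field
    // and spin forever: the write is atomic, but not guaranteed to be visible.
    private static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) {
                // busy work
            }
            System.out.println("Worker saw the update and stopped.");
        });
        worker.start();
        Thread.sleep(100);
        running = false; // visible to the worker because the field is volatile
        worker.join();
    }
}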
Ignorance of Polymorphism.
Even in Python (with duck typing!) people still try to write
if type(x) == SomeClassIDefined:
    x.aMethod(arg1)
elif type(x) == SomeOtherSubclassIDefined:
    x.otherMethod(arg1)
else:
    x.yetAnother(arg1)
Where, clearly, they should simply rename the three methods to create polymorphic classes. They can then eliminate the if and simply call
x.renamedMethod(arg1)
Yesterday I saw the "surrogate type check" design pattern.
for arg1 in aBigList:
    if someOption == "x":
        result = someObject.aMethod( arg1 )
    elif someOption == "y":
        result = anotherObject.aMethod( arg1 )
    else:
        result = defaultObject.aMethod( arg1 )
Sigh.
At parameter-parsing main-program-startup time, they should have done this.
if someOption == "x":
    theWorkingObject = someObject
elif someOption == "y":
    theWorkingObject = anotherObject
else:
    theWorkingObject = defaultObject
Then, in the deeply-nested loop they could do this.
for arg1 in aBigList:
    result = theWorkingObject.aMethod( arg1 )
Simpler. Faster. Polymorphic. Pythonic.
I agree on the byte<->character issue. There are even several issues in this:
- "I want to store a byte[] in a String, how do I do that?"
- "I want to store Chinese characters in a database, but only ever get <?>"
- "I use UTF-8, but still only get <?>"
The last one is very unfortunate because it comes from half-knowledge. The usual reason for this is that while they use UTF-8 at one point they completely ignore all other places where the encoding would matter.
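A tiny Java illustration of that last point (assuming a modern JDK): declaring UTF-8 in one place doesn't help if another conversion silently falls back to the platform default.

import java.nio.charset.StandardCharsets;

public class EncodingDemo {
    public static void main(String[] args) {
        String text = "中文";

        // Encoded explicitly as UTF-8 here...
        byte[] utf8Bytes = text.getBytes(StandardCharsets.UTF_8);

        // ...but decoded with the platform default charset: mojibake on any
        // system whose default isn't UTF-8.
        String wrong = new String(utf8Bytes);

        // Using the same charset at both ends round-trips correctly.
        String right = new String(utf8Bytes, StandardCharsets.UTF_8);

        System.out.println(wrong);
        System.out.println(right); // 中文
    }
}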
The lack of desire to continually improve. I've seen a lot of developers get to a certain level of skill and then just stop learning new things. No reading of blogs, journals, or books; it's like they reached a certain skill level and went "yep, I know all I need to know now".
The biggest mistake I see new programmers make is trying to prove their code is correct when it obviously isn't. It usually runs like this:
Programmer writes some code. It fails when run.
He then spends several hours staring at his code convincing himself that it's correct.
He asks for help and instead of accepting that it must be wrong, he focuses on why his code must be right and how 'something else' is causing the problem.
My advice: Assume someone else wrote that code and you know it's broken. Find the broken bit...
The attitude that testing is unnecessary or time consuming
If tests aren't written, then there is no way of knowing when some change in the system breaks something elsewhere. Writing tests saves time and money.
In response to Kendall Helmstetter Gelner's comments: testing actually helps refactoring. If you have tests that tell you what the application should do, then when you refactor, those tests should still pass. This is where I have saved many hours of work; after all, the alternative is no tests, or doing manual testing for everything, and that is a massive time sink.
I recently became aware that a lot (and I mean a LOT) of programmers are not familiar with the concept of inheritance and have absolutely no idea why it is useful.
With large datasets being moved between systems in XML, not understanding the merits of SAX over DOM, and the performance implications of selecting DOM simply because it is easier to implement. I have seen a number of totally unnecessary performance bottlenecks and system failures over this, with XML getting blamed rather than the lazy parser implementation.
I really don't like it when people test the value of a boolean, e.g.
if (isSomeCondition == true)
Not only is it redundant and unnecessarily verbose, but with a language like C#, where there's no implicit conversion from e.g. int to bool, it can actually lead to errors. In C# you cannot make the classic assignment-instead-of-comparison mistake unless you're testing a bool, so

if (someInt = 0)

will not compile, but

if (isSomeCondition = true)

will (but will probably not yield the desired result). In other words, for bools the explicit compare actually makes it easier to make mistakes. Please just use

if (isSomeCondition)

It is easier to read and, at least in C#, less prone to errors.
It easily has to be that 'Commenting bad code is better than actually refactoring it into good code'
Programmers not trying to learn the difference between decimal and double
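(decimal here usually means the C#/.NET type; the same trap exists in Java with double versus BigDecimal, which the hedged sketch below uses to show why it matters for money-like values.)

import java.math.BigDecimal;

public class DecimalVsDouble {
    public static void main(String[] args) {
        // Binary floating point: 0.1 has no exact representation.
        double d = 0.1 + 0.1 + 0.1;
        System.out.println(d == 0.3); // false
        System.out.println(d);        // 0.30000000000000004

        // Decimal arithmetic: exact for this kind of calculation.
        BigDecimal b = new BigDecimal("0.1")
                .add(new BigDecimal("0.1"))
                .add(new BigDecimal("0.1"));
        System.out.println(b.compareTo(new BigDecimal("0.3")) == 0); // true
    }
}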
Commenting tautologies:
i = 0; // set i to zero

// loop column 80 times
for (n = 0; n < 80; ++n)
    putchar(' ');
instead of stating the intent:
// scroll one line
for (n = 0; n < 80; ++n)
    putchar(' ');
Programmers not knowing that conditions can be simplified (De Morgan's laws), e.g. coding multiple negatives:
while (keypress != escape_key && keypress != alt_f4_key && keypress != ctrl_w_key)
instead of the one easier to read:
while (!(keypress == escape_key || keypress == alt_f4_key || keypress == ctrl_w_key))
Note: mentally read the construct while (!(...)) as until:
until (keypress == escape_key || keypress == alt_f4_key || keypress == ctrl_w_key)
[EDIT: 2009-09-25] Related to this question (simplifying the condition):
Naming negative variables, e.g. Unpaid, NotFound.
Using this:
If Not Unpaid Then
If Not File.NotFound Then
Instead of what can be easily understood:
If Paid Then
If File.Found Then
Thinking that it is OK to swallow an exception:
try {
    ...
}
catch (Exception e) {
    e.printStackTrace();
}
The default Eclipse template does this and so many people just catch a checked exception to get their code to compile and then ignore the ticking NPE.
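For contrast, a hedged sketch of two saner options (the config-loading example is made up, and Java 11's Files.readString is used for brevity): either declare the checked exception so the caller has to deal with it, or wrap it in something meaningful instead of printing and carrying on.

import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class ConfigLoader {
    // Option 1: let the checked exception propagate to a caller that can handle it.
    static String load(Path path) throws IOException {
        return Files.readString(path);
    }

    // Option 2: translate it, preserving the cause, rather than swallowing it.
    static String loadOrFail(Path path) {
        try {
            return Files.readString(path);
        } catch (IOException e) {
            throw new UncheckedIOException("Could not read config: " + path, e);
        }
    }
}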
edit: A post by Reinier reminded me of this one:
if (condition1) {
    if (condition2) {
        if (condition3) {
            if (!condition4) {
                if (condition5 || condition6) {
                    if (condition7) {
                        if (condition8) {
                            if (condition9) {
                                if (condition10) {
                                    // do something important
                                    ...
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
Programmers carrying over habits which may be desirable in one language, but aren't in the new one.
Classic example is seeing C# or Java code like this:
if (5 == someValue)
This is usually written by ex-C or C++ developers who are trying to avoid the typo of:
if (someValue = 5)
which is valid C/C++ (although it generates a warning in most compilers). In C# and Java it's just unnecessary, and I believe most people find it harder to read than the more natural:
if (someValue == 5)
My personal pet peeve (petty, but my teeth grind every time I see it) is verbosely setting booleans, e.g.
bool isValid;
if (percentage >= 0 && percentage <= 100)
    isValid = true;
else
    isValid = false;
What's wrong with
bool isValid = percentage >= 0 && percentage <= 100;
It's soooooo much more succinct and easier on the eye
Complacency with duplicate code. Two blocks of code which are initially identical are a maintenance headache. They are going to gain differences over time due to being used differently, yet there will be cases where the same fix has to be applied to both similar but non-identical parts. You can try distinguishing after the fact between a fix that should have been applied to the other copy of the code but that was overlooked, and a fix that deliberately wasn't applied to both. It will make your head hurt.
I did code reviews of prospective hires a while back, and realised that the main bar that most applicants needed to get above was nothing fancy - not good Object Orientation, appropriate use of Design Patterns or the like, but just plain old factoring of code into well-named, re-usable methods. I.e. avoiding the "100s of lines of repetitive code in button click handler methods" pattern. This was discovered with "structured programming" in the early 1970s, before most of those applicants were born.
Another Java/.NET one (this SO question is just great for letting off steam...)
Myth: "Value types live on the stack, reference types live on the heap"
Reality: It's more complicated than that.
The C# language doesn't actually guarantee any particular heap/stack behaviour, and the CLR could potentially do funky things with objects which can never escape from the current method. However, let's take existing C# behaviour and ignore special cases like stackalloc and captured variables for anonymous functions and iterator blocks.
First let's talk about variables. Variables have a context - either they're local to a method, or they're static, or they're instance variables as part of either a value type or a reference type.
- Static variables are always on the heap.
- Local variables are always on the stack. (Remember I'm ignoring captured variables here :)
- Instance variables live in the context of their container. For a reference type this will always be on the heap. For a value type it depends...
- The value of a variable is never an object. If it's a reference type variable, the value of the variable is a reference. The reference may well be on the stack (e.g. if it's a local variable) but the object it refers to (if it's non-null) will be on the heap.
- Value parameters are on the stack.
- Reference parameters (i.e. those with ref/out) will vary by caller.
The value of the variable is stored wherever the variable conceptually lives. So an integer variable which is part of an object will always be on the heap (contrary to the myth). A variable which is part of a value type will live wherever that value type instance lives - which may be on the stack (e.g. if the containing instance is the value of a local variable) or on the heap (e.g. if the containing instance is the value of an instance variable in an object).
That's probably a very confusing explanation because I'm rushing to get to lunch, but basically the myth is far too simplistic, partly because it doesn't talk about variables (or more generally expressions) at all. The context is very important.
Performance isn't that much of a problem until performance becomes a problem. No matter how much you talk about premature optimisation, people keep on doing it, at all kinds of levels; there is nothing virtuous in writing 2000 lines of compiled code when you could have written 20 lines in a dynamic language, just to save 20 processor cycles when your processor is running 95% idle anyway.
If the time comes when performance is a problem you can fix it then, but basing all your decisions on the assumption that it will be a problem wastes everybody's time...
I call it "coding from the hip", but it really is a specific category - a better name may be "overimperative programming" or "structureless programming":
The code is structured as one or more function implementations (e.g. of the main function in a C program, or of a set of user interface event handlers in a .NET end user application). The code inside those functions was written line by line, by determining the next thing that needs to be done, writing a line of code to achieve that, and repeating this until done.
When a case distinction needs to be made, an if statement is added; then, line by line, all the code for what happens if the condition holds; then the else clause is added; then the code for the then part is copied over and modified until it is the code for the else part. So complex condition checking appears as an arbitrarily large tree of nested ifs.
Iterations were traditionally programmed by jumping back to some point with goto, then tweaking until everything seems to work, but Dijkstra's protest against this has become too strong, so now the tweaking is done with for and while loops. All iterations are programmed by explicitly creating arrays for the data to be used in each case, then filling the array using an integer index variable (without explicit numbers we lose track of where we are, don't we?), followed by another such loop to read and apply the data.
More complex data are stored in multidimensional arrays or arrays of arrays and structs; other data structures are absent, and pointers are a major source of bugs, when used at all. All variables and arrays are treated like an assembly language writer's memory locations, so they are all global, with meaningless names, or incorrect ones due to being randomly repurposed. Correcting index bounds and array overflows are the programmer's main sources of debugging time.
Rewriting and extending code is done by scanning for points at which a change or extension is required, then adding and copy-pasting statements and ifs in the usual way, then tweaking the result until it appears to work.
This is not always a result of ignorance: sometimes the programmer got so little time, or got it so incrementally, that there wasn't time to think about design. Nor is it always bad: if the resulting code is small enough, there may be nothing wrong with it.
The main danger of starting programmers out on a diet of assembler or C (or a similar subset of some other language) is that they fail to proceed beyond this stage. Most of the well-known programming improvement techniques (no goto, sensible naming, advanced data structures, structured programming, libraries with APIs, object-oriented programming, functional programming, layered architecture, patterns, refactoring, etc.) are attempts to help them do this, either by incremental fixing or by starting out in a radically different way.
Treating coding standards as absolutes is my pet peeve. Coding standards are good things that improve readability, but there are always exceptions to the rule. The classic example is the "one return per function" rule. Sure, it's good to limit the number of returns in a function, but there are situations where multiple returns are preferable to contorting your code to use one return.
There's a couple of things. One is the same thing you pointed out - lack of proper understanding of Unicode, assuming that all text is represented by a list of single-byte characters (as pointed out by the powers-that-be).
The other is developers who don't take the time to actually understand what something does or how the specs are defined, but just work by trial and error until they find something that works for their particular situation and use it. Then they get surprised when it fails under different input (and often go off and add convoluted if-else clauses, after more trial-and-error work of course, to handle all this anomalous data).
Oh, and as a corollary to the above - IE. There are so many elegant, powerful techniques that you have to abandon simply because of poor implementation in that browser - and when they do get fixed (it's getting better, I'll admit) you still can't use them for another few years until the majority of the public stops using the buggy versions. IE 8 looks like it will finally allow you to have a cookie string of more than 4k without effectively deleting all cookies - but how long until one can write code without having to guard against it?
My pet peeve around here is treating crashes as "user errors".
We work with quite complex data structures and GUIs, and sometimes users put in data that triggers some edge case in the model, or uncovers a bug in the code. The program core dumps. Some of my co-workers simply tell the user not to do it any more - end of problem.
In my opinion, every such case needs to be debugged, and the crash turned into an error message telling the user what's wrong and how to fix it. It's not the user fault if the model can't handle rates below 1% - the model needs to tell the user about its limitations.
Mine is "bytes and characters are NOT the same thing, nor trivially convertible". I can't count how many times I've seen otherwise competent programmers completely ignore the issue of character encodings, misapply them horribly, or do multiple unnecessary and potentially destructive conversions between them.
The worst case I've seen, an overloaded method for handling XML (simplified):
public void setContent(String xml)
{
    SAXBuilder builder = new SAXBuilder();
    this.document = builder.build(
        new InputSource(new StringReader(
            new String(xml.getBytes(), "UTF-8"))));
}

public void setContent(byte[] xml)
{
    this.setContent(new String(xml, "UTF-8"));
}
Count the number of unnecessary and potentially destructive String/byte[] conversions. Count them!
Depending on the platform default encoding is par for the course for naive Java code, but corrupting the data unless it matches both the platform default encoding and a hardcoded one takes real talent - especially when it would have been less work to just hand the byte[] over to the XML parser and have it use the correct encoding declared in the XML data itself.
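For reference, a minimal sketch of the less destructive version (still assuming JDOM's SAXBuilder, as above): hand the bytes straight to the parser so it can honour the encoding declared in the XML, and don't convert the String at all.

import java.io.ByteArrayInputStream;
import java.io.StringReader;

import org.jdom.Document;
import org.jdom.input.SAXBuilder;

public class XmlContent {
    private Document document;

    // String input: already characters, so no byte conversion is needed at all.
    public void setContent(String xml) throws Exception {
        this.document = new SAXBuilder().build(new StringReader(xml));
    }

    // byte[] input: the parser reads the encoding from the XML declaration.
    public void setContent(byte[] xml) throws Exception {
        this.document = new SAXBuilder().build(new ByteArrayInputStream(xml));
    }
}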
I blame it all on the C standard.
Many programmers think that writing unintelligible code that ultimately works somehow shows their genius. It's writing clear, understandable code that makes a good programmer.
A related issue is programmers who change old, unintelligible code without cleaning it up - or who don't even really understand what the old code does, as long as their new addition to it works.
"It seems to work for me, so I won't bother reading manual/specification to do it correctly"
This is why HTML, JavaScript, feeds and HTTP (caching, MIME types) are in such sorry state.
Ignorance of threading
(As applied to .NET/Java; different phases would apply in functional languages, for example.)
I believe developers go through up to 4 phases of threading knowledge:
- Complete ignorance - ignore any possibility of problems. Result: race conditions, weirdness.
- Over-reaction: make every member of every class lock/synchronize. Result: deadlock, code fluff.
- Caution: reapproach the whole problem. Take a long time thinking over any threading issue. Get it right at least some of the time. Live in a state of fear when dealing with threads.
- Nirvana: Instinctively do the right thing.
In my experience the last is more of a theoretical goal than an attainable state.
I'm surprised by how many professional programmers are weak in math. Growing up I just thought that being good at math was a prerequisite for the job. Everyone I knew who was interested in computers was also good at math, so I just made a mental connection without realizing it.
(.NET specific)
Myth: System.Decimal is a fixed-point type.
Reality: System.Decimal is a floating decimal point type, as opposed to System.Single/System.Double, which are floating binary point types.
It didn't help that the MSDN documentation was wrong until .NET 2.0. Many people stood by the documentation, regardless of the fact that the exponent is clearly part of the value :(
In web development, ignorance of proper input sanitization and SQL injection vulnerabilities. In ColdFusion, for example, the language is so easy to learn that it practically welcomes new "programmers" to make this mistake. Much of the beginner documentation reinforced bad usage patterns early on as well. All of the languages that target web development have some kind of SQL injection prevention available, either through a sanitizer or a way to generate prepared statements, but many developers don't know what SQL injection is, much less how to prevent it from happening. This leads to defaced sites, increased distribution of malware, and a general tarnishing of the image of web developers as second-class citizens in the programming community.
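Whatever the language, the cure has the same shape: bind user input as parameters instead of concatenating it into the SQL text. A minimal sketch using JDBC prepared statements in Java (the table and column names are made up):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class UserLookup {
    // The user-supplied name is bound as a parameter, so it is never parsed as SQL.
    // The vulnerable version would concatenate 'name' straight into the query string.
    static ResultSet findByName(Connection conn, String name) throws SQLException {
        PreparedStatement ps = conn.prepareStatement("SELECT id, name FROM users WHERE name = ?");
        ps.setString(1, name);
        return ps.executeQuery();
    }
}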
People who constantly rant about "Code should have more comments in it". If developers spent more time paying attention to sensible naming and a reasonable approach to problems, most comments would be unnecessary. If the code requires comments to explain it, then there is a good chance the code has been badly written.
Developers who concatenate loads of method calls inline eg:
int.Parse(MyMethod(GetValue1(someString, someInt).Property1.ToString(), Convert.ToInt32(GetValue1(someString, someInt).Property2), ((ObjectType)AnotherMethod()).PropertyValue).ToString());
.NET != C++
Saw this yesterday: a programmer wrote some code in VB.NET which passed all parameters ByRef between a few dozen functions. I asked him why he wrote it in that style, and he commented that .NET would make a complete copy of every array parameter before it passed it to another function. I corrected him: "yes, it'll make a copy... of the pointer, but not the entire array".
He fought with me on that fact for a few minutes. I decided it wasn't worth my time to "fix" code that wasn't broken, so I left it as is.
So many things are very common. For example, "Do you know programming in assembler?"
- Confusion between class and object.
- Doubt about when to use the heap instead of the stack.
- Problems with scope.
- Declaring boolean variables and then doing something like this: if (x > 1) a = true; else a = false;
- Brilliant phrases like "The language is not that important."
And so on.
For Pete's sake, please don't use ALLCAPS for any form of constant in C#. Be it enums or const or ANYTHING. If your IDE doesn't tell you whether something is a const, you should find a new IDE, or failing that a new hobby/workplace/job.
That skilled programmers are more valuable working on business code than on technical code.
Assign your best coders to implement your domain model; they'll make it better, and that's the most important part.
Quickly writing really bad code that works, with a plan to refactor later, when it's done - e.g. a 100-line function that "I will break into 5 smaller ones later".
If you do that and then try to refactor after it's working, you usually find yourself facing two options: write it all again (because it's too hard to refactor into really nice code), or leave it the way it is because it's working... and in many cases it's just left in its crappy version.
Inability to take pride in making mistakes. Instead of simply admitting them and moving on, people go to great lengths to cover their tracks.
Mistakes happen. If you meet a "senior something", then that means (before anything else) that this person has made a whole lot of mistakes and learned from them.
So a few years back, I made the hard decision to stand tall for my blunders and it has worked pretty well so far. When I can't find a bug after staring at the screen for more than an hour, I admit defeat and ask a colleague. This helps to avoid creating a bigger mess by "fixing" the bug by hiding it behind a new one.
"He who asks a question is a fool for five minutes. He who doesn't ask a question stays a fool." -- Old Chinese proverb
My pet peeve is developers who religiously attach to their language of choice.
As a practical matter, being a professional developer these days (in almost every space, not just web development) should mean you are multi-lingual and capable/willing to explore other technologies. If you know your favorite language(s) well enough, you should also know where their limitations lie, and attempt to explore other options, instead of hammering a square peg into a round hole. A senior development position (or really, any development position) should come with the expectation that the developer can adapt and learn to fill the role as needed.
This is not just true of languages, but other technologies (app servers, frameworks, etc) as well.
Just one of the most symbolic examples of ignorance in programming (C#):
private string GetMonth(int Number)
{
    switch (Number)
    {
        case 1: return "January";
        case 2: return "February";
        //And so on...
        default: return "Invalid";
    }
}
Happened recently: "The problem cannot possibly be in my code, it must be in the library!"
- New programmers should have hubris writing code but be (very!) humble debugging it!
My pet peeve is code written to join a list/array of strings into a comma-separated list by looping round each item, appending the comma (or other separator), and then, at the end, removing the last separator - when it can easily be done in a couple of lines (assuming C#).
List<string> result = new List<string>();
// do something if needed
return string.Join(separator, result.ToArray());
:-)
Female programmers can't code?
Another peeve of mine comes from the attitude people have toward female programmers, which include among other things:
- "Programming is too hard for women, since they're obviously emotional rather than logical thinkers"
- "The best female programmer is never better than an average male programmer"
- "Women programmers can't be pretty"
- "Women only choose to be programmers because they are in need of a husband"
- etc, etc, etc
One of the women on my team is a tech lead, and she commented to me the other day about interviewing potential employees. Normally, she and one of the male leads would interview candidates together. Consistently, interviewees would speak in very technical terms to the male lead, and dumb it down when they spoke to her. One candidate managed to describe a weird scenario that caused a stack overflow exception to the male lead, and then reiterated it back to her as "a stack overflow is kinda like filling a balloon with too much air; eventually it fills up and finally goes POP!"
I don't know if people have had bad experiences in the past, but I've never seen a perceivable difference in coding style or quality between men and women programmers.
Not commenting code.
Seriously, a whole lot of colleagues have told me that "code that was hard to write should be hard to read" when I asked them why they don't add comments.
I say: "Documentation is like sex. If it's good, it's very, very good. If it's bad, it's better than nothing!"
In order of gravity:
- mindless code duplication, takes first place anytime
- badly written logic that contains so many holes it's like Swiss cheese
- unnecessary complexity
- no comments, or bad ones, making less sense than the code you're trying to understand
Code is not just for communicating with the computer, but also with fellow programmers.
You can throw all sorts of rules on comments and variable names at people, but it really doesn't matter that much. Either they grok the above (and don't need rules except perhaps as guidelines) or they don't (and will only obey the letter of the rules).
People that think it is OK to not comment code because of reason X.
I have heard all kinds of pithy statements like "Comments are lies", "Write more readable code instead of commenting", or "Name your variables and functions correctly and you don't need to comment". Bull hockey! Writing readable code and using good names for functions and variables are good ideas. But leaving out comments is not.
I don't know how many times I have had to examine a block of code for minutes or hours trying to figure out what it does and why it does it, when a simple comment would have saved me most of that time.
In C#/.NET, I hate the lack of metadata comments on functions and properties. Being able to bring up IntelliSense and find a short set of comments about a function is invaluable to your fellow programmers.
Of course I am guilty of not adequately commenting code throughout my career. I probably wrote some code yesterday that I didn't comment. But the attitude that this is OK for some reason X is completely wrong.
P.S.
I also hate the other school of thought, the "Leave detailed comments on everything" camp. I had a couple of computer science professors like this back in the days of college. If the line count of comments in a function equals that of the code, you have a problem.
Naming. Naming of classes, methods, functions, variables or modules. The name should be simple and easy to understand. And it should actually hint at its intended use. I hate it when I have to stare at some piece of code for much too long just to find out that it does something totally different than its name suggested.
Copying and pasting duplicate code throughout a series of similar classes, rather than using inheritance or composition to put the required functionality in one place. That can be very difficult to refactor!
Hello, my name is Nathan and I am a recovering "No" developer. ( <-- current pet peeve)
I used to hear a request for a feature and I'd say "No!". Then I'd say, "It cannot be done!", or "that's not how the product works".
Finally, worn down as the business guy convinces me that if we cannot do this we'll go out of business, I decide to think about it for a minute and code it up while he's going on and on trying to convince me about why this is such a good idea. I tell him, it'll be in the next release, and he leaves exasperated but happy.
Now, I try to be a "Yes" developer.
note: The business guy is often the developer on your team that wrote the framework you have to use that doesn't quite fit the bug you just got assigned.
Programmers that are so religious towards some programming construct they won't hear the other side. Example: stored procedure zealots.
How a computer works.
If you are a programmer, you need to know how the hell a computer works. You need knowledge of its function and behavior, as far as it concerns computer programs: RAM, CPU, cache, I/O, DMA, PIO, interrupts, etc.
You don't need to know assembly in particular, but concepts like flags, registers, branches, stacks, stack pointer, instruction pointer, memory, pages, DMA, interrupts, semaphore/lock support and things like that must be understood.
I don't care if your language abstracts memory management, if your database framework abstracts disk access, or even if you use a framework abstracting distributed computing. It still gets run by computers and suffers from computers' limitations, which does impact how your software works.
The idea that using On Error Resume Next means you don't have to check for errors. I have to maintain a cesspit of VBScript and the guy before me just littered On Error Resume Next everywhere, without bothering to do any error checking at all.
The web is stateless and the browser isn't part of your app.
I find a lot of programmers get caught up in their framework of choice and they ignore, forget, or never understood that each request to a web application is like a brand new program running. We (or our framework) have to do a lot of work to maintain state and simulate a "logged in" experience. Some of that work involves having the browser store stuff and give it back to us, but we cannot rely on the browser doing the right thing.
Plus too many programmers don't even know the difference between code that runs on the server and code that runs in the browser. I have had arguments with programmers who insist that ColdFusion, PHP, or VBScript code that is inline in an HTML page is run in the browser.
One of my biggies is that many programmers don't understand internationalization. Even when an app is supposedly built with it in mind, it's usually not done right.
- There will be string concatenation to form sentences that might work fine in English, but the word order is wrong in other languages (should be using some kind of templated substitution). Or they'll blindly add "s" to the end of a word to make it plural -- that doesn't even work well in English, much less other languages.
- They'll support multiple currencies (and currency symbols), but assume the decimal marker is always a period or that the currency symbol always goes at the front of the number.
- Dates will be written in an ambiguous order (usually the American way of month/day/year).
- I know I've seen more of these, but I can't think of them right now... maybe I'll edit it as I think of them.
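A small Java sketch of the first three points (the locale and strings are just for illustration): let MessageFormat handle word order via a translated pattern, and let the locale drive currency and date formatting.

import java.text.DateFormat;
import java.text.MessageFormat;
import java.text.NumberFormat;
import java.util.Date;
import java.util.Locale;

public class I18nDemo {
    public static void main(String[] args) {
        Locale locale = Locale.GERMANY;

        // Templated substitution: the translated pattern controls the word order.
        String pattern = "{0} hat {1} Dateien gelöscht."; // would come from a resource bundle
        System.out.println(MessageFormat.format(pattern, "Anna", 3));

        // Currency: decimal marker and symbol placement follow the locale.
        System.out.println(NumberFormat.getCurrencyInstance(locale).format(1234.56));

        // Dates: formatted in the locale's own, unambiguous style.
        System.out.println(DateFormat.getDateInstance(DateFormat.LONG, locale).format(new Date()));
    }
}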
I didn't have time to read more than the first page of answers, so please pardon me if this is a duplicate.
The myth that writing the code is the main part, while debugging is just an extra.
They are both faces of the same coin. If one is shitty, the overall result will suck.
Poorly named variables/classes/methods where the programmer is trying to stay within some artificial 8 character limit. This combined with extremely verbose (and often unnecessary) comments is one of my biggest pet peeves.
Believing that catching and ignoring exceptions means preventing a bug. In most cases the exception means that there is a bug in the code. Ignoring the exception is just looking the other way. This is especially true if the code catches the base class Exception.
Or to put it another way: some people seem to be more willing to let the application continue in an undefined and possibly illegal state than to accept the fact that there's a bug in the code which should be addressed.
People don't understand the possibilities of disjoint-union types, perhaps because the support in C, C++, and Java is so abominable, and in a dynamically typed scripting language, everything is a disjoint-union type, so they disappear into the woodwork. Programming with disjoint unions and pattern matching is one of the great pleasures of a language like F#, Haskell, or ML.
Reinventing the wheel when micro-designing components of an enterprise application. Examples from J2EE apps: various custom ways to read a property file, component-specific custom-made logging, a database connection pool written from scratch, various attempts to custom-build authentication mechanisms, etc. I always wondered what was wrong with the standard means we already had for these tasks...
Inexperienced programmers are apt to believe the terrible fallacy that there is such a thing as placeholder code, which they dutifully demarcate from the rest of the project with a giant TODO comment. Such code is generally rife with half-assed algorithms, lousy variable names, random comments, ugly formatting, and scads of corner-case bugs.
What these programmers have yet to realize is that on a commercial software project, the schedule pressures will eliminate any time to go back and clean up that code. When the testers find bugs in that functionality, the programmers will have only enough time to apply the minimally invasive fix for each bug. Eventually, that placeholder code will have survived weeks of testing and will therefore be deemed a low priority for refactoring.
Nobody expects beautiful, bug-free code. Just try not to commit any code to the source tree that you wouldn't want to ship in the final product.
Lazy naming conventions
I can't stand it when programmers try to take short cuts when naming their methods and variables.
AA is not an acceptable variable name.
Being descriptive saves time later when you have to re-read your code or if someone else has to figure out what you wrote.
PS. Related to this is putting good descriptions in front of your methods. You have no reason not to in VS2005+; it practically does it for you if you hit ''' in front of the method name.
Most programmers don't seem to realize that any database product based around SQL is not a relational database. Somehow the whole concept of a relational database gets smeared because of how awful SQL is. Web developers now want to use new untested database paradigms because they just can't stand the idea of using a "relational" (that is, an SQL-based) database. Go ahead and read the SQL standard and try to find any occurrence of the word "relation" or "relational".
In reality, there has never been a mainstream relational database. There are a couple of research projects (like rel) that implement the relational concepts. But it all has this kind of grandpa's-suspenders air about it that nobody wants to touch, because it's just not hip to be mathematically and logically rigorous nowadays.
Programmers that are commissioned to write an enhancement, but end up rewriting the program because they "don't like" the way the original was written.
My Application Owns the Computer
A pervasive attitude among programmers is that the only reason people own a computer is to run their application. Symptoms include:
- Usurping shared resources like the desktop, system folders, task bar, registry, ... ("The whole machine is for my use.")
- Can't turn off the app ("The only reason the machine would be on is to run my app, so I'll install an auto-startup service, a startup app, an Explorer plug-in, ...").
- Resource hogging ("I can just grab exclusive access to files, database, or network connections when I launch and keep everything open.")
- Interrupts workflow with pop-up messages, tooltips, alert balloons, taskbar messages, status messages, sound effects, ... ("Look at me! I'm working! Do you see me? I'm doing something!!")
- Collateral damage ("I don't use that so I'll delete it.")
- Race conditions ("Anything I do will stay that way forever until I change it.")
- Security breaches ("I can expose everything on the machine, since I am the only one that will ever access it.")
- Lack of interoperability ("My app has everything it needs so I don't need to support file export or cut and paste.")
- No deployment ("I will never have to update or uninstall my app; they'll just get a new machine.")
- Inner-Platform Effect (app takes over standard OS functionality like network connections, user authentication, deploying third-party software, UI look and feel, et al.)
Unintuitive variable names! God, I hate it.
Have you ever read someone's code (if it's while hunting for a bug, it's even worse) and wondered what the hell "nfi" (NumberFormatInfo), "par" (parameter), or "mkAtt" (make attribute) mean? I even saw XML yesterday containing data with attributes n and v (for name and value), and a comment above saying that n stands for name and v stands for value...
People, if you've got IntelliSense, why are you so afraid to write a full, understandable name?
I admit I'm a bit obsessed with nice variable names, but it's just because it's sooo easy to read if you write your code properly.
I prefer
foreach (string parameterId in idsToNames.Keys)
to
foreach (string key in parameters.Keys)
or
foreach (string p in pars.Keys)
Far and away: construction-time optimization.
I can't count the number of times that I've encountered early loop termination, strange handling of variables, direct access to class members, breakdown of hierarchies, etc... generated in the name of optimization. This seems to be one of those things that every book on programming mentions and that nobody follows.
People, if you're writing a net-centric app, or doing heavy DB accesses, or waiting for a user, particularly in multithreaded apps, you will be spending FAR more time waiting for networked I/O than processing data. With that kind of performance profile, any sort of optimization at all will be essentially unnoticeable in terms of performance. It's much more important to write code your mother can read. In this mindset, optimization is fine - you can return early from that linear search through a list if the looked-for element is the second one you see - but it must be simple and obvious. Think of how much money and stress you can save yourself, and your company, if any time person B picks up person A's code, person B can understand it on the first read.
Selective Standards Religion
A developer will crusade for the standards of their chosen technology stack, and completely ignore or even disparage standards outside their primary focus.
Web Developer: "Most DB people are clueless! They don't know the first thing about CSS. Most just use tables to position everything! Haven't these people heard of Standards?"
"What difference does it make whether I use the SQL standard or Product X's proprietary command to retrieve the report data? I get the same result, don't I? I don't even need to worry about the database - my ORM deals with all that."
Backend Developer/DBA: "These UI scripters can't even spell 'relationship'. If even one of them knows the definition of third-normal-form, I'd be stunned."
"The scripters keep nagging me about changing my sales report pages to support their niche browser - why can't they just get with the program and use Browser Y?"
Note - these are examples, and are by no means comprehensive.
The moral of the story is to understand that most technology areas have standards, and you will only improve your skills and value by learning them. Even when you choose to go against the standard, you will be doing it from a position of knowledge, not ignorance.
I have three: Web programmers who don't know HTML and Javascript, but instead depend on frameworks that they don't really understand - they don't know what the framework is producing on the client.
Application programmers who don't understand that the computer doesn't run their source code (they don't understand the compilation/interpretation step)
SQL programmers who don't write SQL - ie. they write procedural languages using SQL syntax.
I suppose I could go on forever with this, but those are the top three
Short "else" clause after a long "if" clause, especially when the else just throws an exception. I prefer to detect the error case first and throw the exception, which tends to limit the nesting of subsequent code.
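A tiny Java sketch of that shape (the Account class and names are made up): check the error cases up front and throw, so the happy path stays flat.

public class Account {
    private long balanceCents;

    public void withdraw(long amountCents) {
        // Guard clauses first: fail fast and keep the rest of the method unindented.
        if (amountCents <= 0) {
            throw new IllegalArgumentException("Amount must be positive");
        }
        if (amountCents > balanceCents) {
            throw new IllegalStateException("Insufficient funds");
        }
        balanceCents -= amountCents;
    }
}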
Thinking that customers know what they want
Don't take the customer's words literally. Understand the problem, talk about it with others, think of many creative solutions, and implement the solution that works best for most users.
Label1, Label2, ... Label126, ...
Button1, Button2, ...
ooohhhhh ... I just want to smack somebody! ;-)
if .... else-if .... else-if .... else
with a nesting hierarchy of four to ten levels, spanning several thousands of lines.
The complete permutation of all your branching logic in a single method/function.
Hello, ever heard of polymorphism? Wait, you don't even know how to derive classes?
Displaying a message box instead of raising an exception when a method fails to do its job. For example, a Save() method in a Form simply showing a message box, instead of raising an exception, because the user hasn't filled in some required field, etc.
Because they don't raise an exception, any code calling the Save method has no freaking idea that the Save failed or why it failed!
Typically at this point I'd expect at least one person to say that exceptions should be used "exceptionally", i.e. rarely. If you follow this philosophy then you still need some way to tell your calling code that you failed, which results in changing your method signature so it returns failure details either as a result or an out parameter, etc. And of course your calling code will need to tell its calling code that it failed, and so on. Ahh, hello world, this is exactly what exceptions are built for!
Maybe this thinking doesn't work in all frameworks (like web, etc) but in Delphi Windows applications it's perfect as unhandled exceptions don't crash the application, once they travel back to the main message loop the app simply shows a presentable message box to the user with the error details, they click OK and program flow continues to process messages again.
...the single worst subject of widespread ignorance amongst programmers...
- "the business domain doesn't matter" aka "the business reasons don't matter" aka "the business is none of my business"
Overengineering, usually to make unnecessary optimizations.
These are usually done by senior developers. They usually add a lot of complexity while providing minimal (if any) speed improvements. What's worse is that after they're done, some other unlucky developer gets stuck with the "optimized" code.
In C#, when Visual Studio's default names are left in, and I have to figure out what button23 does, and why it reads from TextBox13 by flipping back and forth between the code and visual views of Form1.cs.
Programmers who write help pages thus:-
This page allows you to add a foo. To add a foo, enter the name of your new foo in the field labeled "Foo Name". Select the type of the foo from the list and click "Save".
Lack of focus on the customer and their needs.
If I'm aspiring to be a professional then there are many soft skills I should acquire along with the technical skills. Foremost among these is the ability to develop an effective working relationship with my customer, whether internal or external.
I might write the best code in the world, but if it isn't what the customer needed then I'm not doing a professional job.
My pet peeve is how quickly we all forget the customer as soon as we click on that icon for our IDE.
My biggest pet peeve is developers who think that because they put together a solution for small company x that
- said solution will scale to all scenarios
- they are now architects, where real architects (I'm not one) are a whole other breed
A smaller subset is developers who are too lazy to learn and insist on coming to a senior (read: busier) developer to get them to solve ALL their problems. (key point ALL, of course they should rely on the hot shots for help with hard problems)
My pet peeve, one that I care for and nurture, has got to be "Well, we have done it like that for years". Technology moves on, and so should programmers. I don't mean use the new version just because it is the new version. I used to hear it a lot from a VB programmer who clung onto VB6 with a vengeance. He didn't want to leave the bloated, dated and very slow VB app that he had, because it had been perfectly good when he wrote it X years ago.
People who still use Hungarian notation for variables, like strName and dblAmount, in strongly typed, reflection-rich languages like C# and Java, even though these days there are powerful IDEs, IntelliSense, etc.
So far in my years of development I have found that I resent most those programmers who can't keep deadlines. It's OK to go over because of some unforeseen trouble, but to look me in the eye and say "It will be finished tomorrow" and then start coding next week is not acceptable.
Use of inheritance when composition or external functions are more appropriate
(I would have thought someone else would have brought this up by now, but I browsed through the four pages already up and didn't see it - apologies if I missed it.)
It is still so common to see someone think along the lines of, "I need a string that also lets me do X", so they inherit from string and add their X method.
Or, "I want a queue class that works in a multithreaded environment." Inherit from Queue (or whatever you have) and add overloads that acquire and release locks.
In the first instance it is more appropriate to have a free function (or a static method, in languages that don't have free functions) that takes the string as a parameter (along with any other parameters) and works with the public interface.
In the second, write your threaded queue as a new class that contains the raw queue class, and expose the interface that is appropriate. Sometimes this involves a lot of forwarding methods - but that in itself should not be the reason for choosing inheritance.
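A minimal Java sketch of that second case (type and method names are illustrative): the thread-safe queue contains a plain queue and exposes only the operations it wants, rather than inheriting and overriding.

import java.util.ArrayDeque;
import java.util.Queue;

// Composition rather than inheritance: callers only see the interface we choose,
// and the locking policy can't be bypassed via inherited, unsynchronized methods.
public class ThreadSafeQueue<T> {
    private final Queue<T> inner = new ArrayDeque<>();

    public synchronized void enqueue(T item) {
        inner.add(item);
    }

    public synchronized T dequeue() {
        return inner.poll(); // returns null if empty; a real class might block or throw
    }

    public synchronized int size() {
        return inner.size();
    }
}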
Inheritance should be reserved for the case where your new class has a superset interface (it could be the same), and where, for callers who only see the static type of the base class, the behaviour still makes sense (so it is substitutable in the Liskov sense). Furthermore, at least some (some would say all) of the methods should necessarily depend on some of the protected state/interface. That is - if you could implement all the new methods using only the public interface, you are not changing the behaviour of existing methods for base class clients, and no new state is introduced, why do you need to inherit?
As an aside, some languages support constructs such as C#'s extension methods, which can also be more appropriate in some cases, and also open to mis-use - but that's another subject.
Ignorance of thoroughness
"That isn't a condition I should account for, the user should never do that".
"I just write new code, other parts of the development cycle aren't my job (analysis, testing, planning, documenting)".
"I just get the job done. I don't worry about the fact that someone will have to continue to maintain this code, or that business rules can change".
How did I come to think developers are ignorant of thoroughness? Because I've made plenty of those mistakes myself!
The idea that "intuitive" interfaces can actually exist. Sorry, but every interface is learned.
Although it is true that this idea usually comes to me from a business analyst...
Using a collection of if conditions instead of a regular expression. I once saw a 1000+ line function that could have been reduced to 2 regular expressions.
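A hedged Java sketch of the idea (the ZIP-code rule is just an example): one compiled pattern replaces a pile of per-character if statements.

import java.util.regex.Pattern;

public class ZipCodeCheck {
    // Five digits, optionally followed by a hyphen and four more (US ZIP+4).
    private static final Pattern ZIP = Pattern.compile("\\d{5}(-\\d{4})?");

    static boolean isValidZip(String input) {
        return ZIP.matcher(input).matches();
    }
}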
- Windows people who still think Linux is command-line only.
- Linux people who complain about Windows not being distributed with build tools, not realising that for some years MS has made an extremely high-quality, highly standard-conforming and highly optimizing C/C++ compiler and IDE available for free.
Not using loop continuation statements. Imagine a multi-screen method that keeps indenting near-endlessly.
while (someCondition) {
    if (myOtherCondition) {
        if (yetAnotherCondition) {
            doSomethingWorthwhile();
        }
    }
}
versus
while (someCondition) {
    if (!(myOtherCondition && yetAnotherCondition))
        continue;
    doSomethingWorthwhile();
}
In no specific order:
People who pick a certain technology for a project just to get it on their resume. We have this one unmaintainable app at work because the developer (who left about 6 months after starting the project) decided he needed to write it in something neither he nor anybody else knew.
People who believe that there is one true way to do something and that everybody else who doesn't agree is either stupid, ignorant, or a heretic.
Premature/Nonsensical optimization strategies. This was taken to its extreme by one of my former co-workers at a Java job (I love Java, this has nothing to do with the language). He refused to use interfaces, non-final methods, or non-final classes. He believed everything (EVERYTHING) should be cached to the extent that he wouldn't create objects and would cache even the simplest things. He believed that all this made his code "more performant" (is that even a word?!). Of course, he wouldn't cite any sort of proof that this was the way to do it, nor would he prove that his code needed this optimization in the first place. Code like this is a joy to test, by the way.
Job title as a defense. Same job as the last part, we had this guy who was convinced that java.lang.String had some kind of bug in it. When I tried to point out that it might be his code instead, he started in with the "I'm a Lead Developer, and if I say it's in String, that's where it is. You need to follow me. I'M A LEAD DEVELOPER!"
I've found that a lot of programmers don't know about the for loop. They'd rather use:
Dim i as Integer = 0
Do Until i > 10
    'do stuff
    i = i + 1
Loop
And when I tried to let one know about the for loop he got mad and said he wasn't going to rewrite all his code just to use a different kind of loop.
The belief that functional programming is new and the belief that functional programming is the end-all be-all to programming.
My pet peeve is a sort of brain-washing that most programmers don't even realize has happened to them - namely that the von Neumann machine is the only paradigm that is available when developing applications.
The first data processing applications using machinery were what was called "unit record", and involved data (punched cards) flowing between processing stations; the early computers were just another type of station in such networks. However, as time went on, computers became more powerful. Also, the von Neumann architecture had so many successes, both practical and theoretical, that people came to believe this was the way computers had to be! However, complex applications are extremely difficult to get right, especially in the area of asynchronous processing - which is exactly what the von Neumann machine has trouble with! On the other hand, since supposedly computers can do anything, if people are having trouble getting them to work, it has to be the fault of the programmers, not the paradigm...
Now we can see the von Neumann paradigm starting to run out of steam, and programmers are going to have to be deprogrammed, and "go back to the future" - to what is both an earlier, and a more powerful, paradigm - namely that of data chunks flowing between multiple cores, multiple computers, multiple networks, world-wide, and 24/7. Paradoxically, based on our experience with FBP and similar technologies, we are finding that such systems both perform better and are easier to develop and maintain.
Blowing off restrictions/constraints for a framework/library/API/subsystem that are clearly spelled out by its authors in its documentation is a big problem. As a corollary to this, I would include: simply not even bothering to read the documentation.
Here are some examples of mistakes I see cause problems over and over again in Java programming:
- Concurrency issues (e.g. modifying things that are non-reentrant from the wrong thread)
- Failing to close things properly; a.) either totally omitting the call due to extreme laziness, or b.) failing to structure it in a properly constructed try-catch-finally statement (see the try-with-resources sketch after this list).
- Mishandling exceptions; a.) putting things in a catch (Exception ex) statement block that belong in finally, b.) doing catch (Exception ex) and then doing nothing in the body, c.) failing to call a logger with a warn/error/fatal when catching-not-rethrowing ex, d.) logging an exception, re-throwing it, and logging the same exception with same msg, e.) throwing unchecked exceptions and not documenting they are thrown in javadoc, f.) failing to throw a nested exception and instead just throwing a different exception.
- Calling a print/println method in production code when using a logger is needed
- Not aborting start up of an application when an error is detected in configuration
- Syntax errors in generated/entered HTML code; HTML has been out for a decade and a half, people - learn the language and use a validator
- Generating/entering XML with syntax errors in it, not validating it, and not turning on validation in the XML parser being used
- Testing HTML with a particular browser - and completely overlooking the fact that it has lots of errors in it because it "looks right" in that version of that browser.
- Spelling errors in comments (decent IDE's point these out now) or worse, in a method name
- Declaring a parameter an Object or something too general to use and then down-casting the argument value received. Ninety-five percent of the time, the need for a downcast could have been avoided by using a different approach: overload the method, use generics, keep track of the type being passed by correctly structuring the control flow and data flow of the program, etc.
- Using flags and type codes instead of polymorphism in object-oriented languages
- Checking for null AFTER dereferencing the value passed as an argument
- Failing to code defensively by checking arguments for correct values: boundary checks, null checks, size checks, and string length checks (empty or too long)
- Processing input from CSV (comma-separated value) data files and not handling the myriad special cases: apostrophe(s) in data, double quote mark(s) in data, commas in data, etc.
- Concatenating values directly into SQL statements instead of using prepared statements with placeholders and storing the values into it that way. Concatenation is generally an evil way to build SQL statements because of the reasons cited above for CSV data. Plus, has no one heard of "SQL injection vulnerability"?
- Calling System.exit method from deep within the bowels of an application, especially one that does not contain resource closing/releasing code in finally clauses of try statements.
- Overlooking the fact that ThreadDeath should never be caught (if you catch it, rethrow it), and usually neither should InterruptedException unless you are consciously handling it.
- C/C++ code that makes JNI calls but fails to check the JNI status after each such call as is required by the spec - and, boy, do they mean it!
- Violating the crap out of the architectural constraints/rules for EJBs.
- C++ code that throws something insane (like false) instead of a proper exception class that documents what went wrong; or on the flip side, using catch (...) to catch exceptions. Doing both in the same program really bugs me, pun intended.
- Putting code in a static initializer that can fail and not paying attention to the weird context it executes in that will vex anyone trying to debug what you wrote when exceptions occur. Try it - not so happy times, eh?
- Writing code such as listed above and then blaming the language, the GUI classes, the SQL server, the J2EE server, the compiler, the operating system, the JDBC drivers, and/or the third party software libraries for your application's slow and/or unreliable behavior.
Those are a few of my least favorite things.
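As a hedged illustration of the resource-closing item above: since Java 7, try-with-resources covers most of these cases without a hand-written finally block (the class and file below are mine, not from the original list).
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class ReadFirstLine {
    // The reader is closed in every case - normal return, exception, or
    // early return - with no finally block to forget or get wrong.
    public static String firstLine(Path path) throws IOException {
        try (BufferedReader reader = Files.newBufferedReader(path)) {
            return reader.readLine();
        }
    }
}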
In a more general vein, I have lost track of how many times I have seen code with bugs in it because someone copied code blindly from somewhere else that supposedly "did something similar" to what they were trying to do.
Copy-paste programming without an understanding any deeper than the name of a function and how many arguments to pass it can get your software product into a world of trouble!
The place I see this happen the most often as I work on Java programs is with concurrency issues.
Sun made it way too easy to create a thread in Java - and way too hard, relatively speaking, to detect/prevent cases where some yokel has violated a constraint.
Fortunately, you can check for this problem using AspectJ. There are plenty of good examples of how to do this in books, online articles/tutorials on the web, etc. Program the aspect in a .aj source file not a .java source file. Then, your application proper will not need to be compiled with the AspectJ compiler in general. Only when you want to have the aspect be in effect do you need to use the AspectJ compiler.
Hmmmm... I guess I have seen a lot of defects occur a lot of times and cause a lot of problems.
I don't think anyone else has posted this - I hate "over-inheritance", where the class hierarchy is 8 or 9 classes deep. I've seen code like this written by fairly experienced people, and I think it's caused by a combination of a naive view of what inheritance is for and an unwillingness to refactor base classes to make better use of encapsulation instead.
My favorite one is the claim that linked lists are quicker than array lists for adding and removing items in the middle. So many people fail to grasp the subtler point and give the canned answer that everyone seems to propagate. This is Java in particular, but the pet peeve applies to the concept in general. Say you call list.remove(2000) on a list of 4000 items: they claim it will be quicker on a linked list than on an array list. What they forget is how long it takes that call to find the 2000th item (O(n)) before removing it (O(1)). The traversal is done in Java code, element by element. With an array list, the removal is a low-level memory copy which, while O(n) as well, will be quicker in most cases than iterating a linked list.
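A deliberately naive sketch of that point (my own code, not a rigorous benchmark - no JIT warm-up, and exact numbers will vary by machine):
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class MidListRemoval {
    // Repeatedly remove the middle element until the list is nearly empty.
    static long timeMiddleRemovals(List<Integer> list) {
        long start = System.nanoTime();
        while (list.size() > 1) {
            list.remove(list.size() / 2);   // index-based removal from the middle
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        int n = 4000;
        List<Integer> arrayList = new ArrayList<>();
        List<Integer> linkedList = new LinkedList<>();
        for (int i = 0; i < n; i++) {
            arrayList.add(i);
            linkedList.add(i);
        }
        // ArrayList: each removal shifts the tail with a tight low-level copy (O(n)).
        // LinkedList: each removal first walks node by node to the middle (O(n)), then unlinks (O(1)).
        System.out.println("ArrayList:  " + timeMiddleRemovals(arrayList) + " ns");
        System.out.println("LinkedList: " + timeMiddleRemovals(linkedList) + " ns");
    }
}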
Oh, just remembered another one...
In C# and Java at least, '? :' isn't named "the ternary operator". It's the conditional operator. It happens to be a ternary operator (in that it has three operands) and it happens to be the only ternary operator at the moment, but that's not its name, nor does it describe the purpose of the operator. It's what many people call it, but that doesn't make it the correct name for it. (Many people spell my name "John", but it's still "Jon" however many people do that.)
If either language ever gains a second ternary operator (it's possible) then all articles/answers/books etc which refer to the conditional operator as "the ternary operator" will become ambiguous.
Yes, this is very much a pedantic peeve, but it still irritates me. I blame book and tutorial authors who've been spreading the non-name "ternary" for years :(
Yet another one: the popular language Sun released in the 90s is called Java, not JAVA. It's not an acronym. There's no need to shout. Grrr.
(I'm thinking of changing my middle name to "grumpy old man".)
That rebooting is a first line of defense solution.
This applies to DBAs too. I've encountered more than my share of programmers/DBAs that think nothing of rebooting a production system to fix things. In fact, I am cringing right now. =)
While this may fix things, it is tremendously disruptive and should only be used as a last resort.
Ignorance of a programming language's order of operations. I've had programmers ask me why something like 10 + 20 * 3 - 5 was resulting in 65 instead of 85. I take one look at it and shake my head. On a related note, I always try to use parentheses liberally.
Ignorance of the principles of reusable code and parameters. A ColdFusion developer I inherited code from had made several pages with names like getWallProducts.cfm, getFloorProducts.cfm, getCountertopProducts.cfm, getBacksplashProducts.cfm, etc. Each of the pages was absolutely identical except for the WHERE clause in one SQL query.
Using string concatenation rather than parameters for SQL:
v_sql = "select cust_age from customer where cust_id = " + v_cust_id
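For contrast, a minimal JDBC sketch of the parameterized version (the table and column names follow the snippet above; how the Connection is obtained is left out and assumed):
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class CustomerQueries {
    // The id is bound as a parameter, never spliced into the SQL text,
    // so quoting problems and SQL injection are off the table.
    public static Integer findCustomerAge(Connection conn, long custId) throws SQLException {
        String sql = "select cust_age from customer where cust_id = ?";
        try (PreparedStatement stmt = conn.prepareStatement(sql)) {
            stmt.setLong(1, custId);
            try (ResultSet rs = stmt.executeQuery()) {
                return rs.next() ? rs.getInt("cust_age") : null;
            }
        }
    }
}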
Ignoring all performance aspects for the sake of quick and dirty code. We all know the axiom "premature optimization is the root of all evil". However, there's a middle ground between premature optimization and writing code with abysmal performance characteristics.
No, you don't need to spend hours tweaking your SQL queries to wring an extra millisecond out of them if you're running a tiny application with a mostly idle system, but you DO need to avoid things like "SELECT * FROM table" just because it was easier to code it that way. Things like this work great in a test/dev system, but what about in the real world where someone will be running with 100 or 1000x as much data in the db? Same goes for any code you write.
Take performance into account enough to recognize when you're artificially creating a bottleneck. This is an area where an ounce of prevention is definitely worth a pound of cure...
My code is so clear I don't need comments - that drives me nuts. No matter how good you are or how clean the code, comments are still helpful. Even with great variable names, clear formatting, and avoidance of "clever" hacks, sometimes it can still be unclear why code is written a certain way. Maybe you use an API in such a way that it's not readily apparent why you need to code something a certain way. Maybe you're testing something with unusual conditional statements that wouldn't be clear to another programmer. Whatever the reason, it's good practice to leave comments, whether for yourself or someone else, that explain anything that isn't obvious, standard code.
At an absolute minimum, it's helpful to include things like function/method comments that explain what parameters are passed and what return value is expected as well as potential error conditions. Failing to do this because "my code is clear so I don't need comments" is just being lazy, ignorant, or both.
In a similar vein, deciding documentation isn't important because it's boring - that just results in a high "bus factor" where the loss of a single person can cripple the ability for the team to maintain code. This is great for job retention, not great for smooth development, and especially terrible for an open source project where the sharing of the code is an integral part of the ecosystem. Code access is not a substitute for documentation.
I am a programmer and I do not need to know how to write a grammatically correct email. I also do not need to communicate with a customer on the phone - it is someone else's job. The only skills that matter are related to programming and nothing, nothing else.
Hate this!
Using global variables as a mechanism for parameter passing. I have seen this mainly in VB6 projects. When you open a project you are greeted with a page of global variable declarations, and the functions usually don't take parameters.
This sucks big time because:
- it breaks encapsulation (the function now has external dependencies)
- it is reinventing the wheel (there already is a mechanism for parameter passing)
- it is usually undocumented (the caller needs occult knowledge)
In C++:
Myth: std::getline is a global function.
Reality: std::getline is not a global function, but a function defined in the namespace std.
There is a common belief that things defined in namespaces other than the global namespace are all "global". In fact, that belief causes confusion about where stuff is really defined.
Here is an example of how to avoid that confusion. Instead of saying:
1) global int variables are initialized to zero if an initializer is omitted.
say the following, which is more correct and probably what you really mean:
1) namespace-scope int variables are initialized to zero if an initializer is omitted.
Note that just because there is no "namespace ... { ... }" written around the global scope doesn't mean there isn't a global namespace: that namespace is just not user defined. It's implicitly created by the compiler before anything else happens.
In C and C++:
Myth: Arrays are just pointers
Reality: Pointers are very different. Arrays are converted to pointers implicitly and often.
An array is a block of elements, while a pointer points to such a block. As this simple observation shows, they can't be the same. And an array also can't be some special kind of pointer, because as it is a block of elements, it doesn't point to somewhere.
Nevertheless, I often see people write that arrays are just pointers pointing to a block of memory. That leads people to try to pass arrays as if they were pointers. Two-dimensional arrays end up being passed like
void myarray(int **array) { array[i][j]...; } // wrong!
while believing that if the array has two dimensions, they have a pointer to a pointer. In reality, an array consists of its elements; it does not point to them:
int a[4][3]; sizeof(a) == 4 * sizeof(int[3])
There are many contexts in which one needs to address a certain element of an array. This is where it converts to a pointer implicitly (i.e. without the programmer writing the conversion). When you pass an array to a function, the function receives a pointer to the array's first element. That pointer is made up by the compiler as a temporary. The two-dimensional array from above would thus be passed like this:
void myarray(int (*array)[3]);
Which makes the parameter a pointer to the array's first element, namely a pointer to the first 3-element sub-array.
Arrays in function parameters
The programmer may declare a parameter to be an array, like in the following example.
void f(char s[]) { }
This will confuse the crap out of a programmer at first, because as he works with the parameter, he finds out that it is actually a pointer. And he is right - it is a pointer, despite being declared as an array. The compiler doesn't care that you told it it's an array; it will make s a pointer anyway. As a consequence, any size you specify in the brackets is silently ignored, too. In this case, and only in this case, s is a pointer, and you can say it points to memory, because s itself does not own the data you access through it.
Variable Length Arrays (VLA)
Note that C99 introduced variable length arrays, which your compiler may silently support, too. These arrays have a size that isn't known at compile time. Their type is called a variably modified type, and these arrays can be used only for parameters or non-static, local variables (automatic storage duration). Here is an example
void f(int n) {
int a[n];
}
In this case, the rule is the same as for non-variable-length arrays. The array here will convert implicitly to int* (a pointer to its element type) when required - just as with the other arrays above. Also, sizeof behaves correctly, and in this case would yield
sizeof(a) == n * sizeof(int)
They are not declared as pointers just because they have a size determined at runtime!
Related answers
- @Rob Kennedy put an answer that contains diagrams, and complements this answer nicely: Passing an array
- @Eddie put an answer explaining the matter further, also with a nice diagram and code alongside: How does an array of pointers to pointers work
Many programmers seem to think that their only audience is the compiler, when code is really written for other programmers. Compilers have no taste. It's an odd kind of prose, but it's for people to read. Tell me a story.
When a developer has no idea how to set up their own machine.
I've worked for a few companies now where a programmer is hired and the machine they are given is generic so it needs Visual Studio, SQL, etc. set up on it. Even when handed install media and/or a place on the network to get the installers from, many developers cannot figure out how to install the tools they need or have no idea what they need to install.
Worst case scenario, this is proof that you have hired the wrong person; best case scenario, they're actually a brilliant programmer who just happens never to have had to install their own tools before. Either way, it pretty much cements the idea that they don't code at home.
Some of this, though, could be because I'm a snot who doesn't trust others to set things up right:
IT: "So, what all do you need installed on this thing?"
ME: "Please just let me have the machine already, I'll put what I need on there"
The difference between "I need to get this done" and "I need to get this done here" (as in I need to add code in this specific location). By far the biggest issue I have encountered in scaling systems up is where code written by various people puts a lot of logic that should live in separate levels of abstraction in a single place.
Copying code from another application they've worked on containing the functionality they want to use, and not changing the variable names (that only make sense in context of the original application) because "the client will never see the code."
Oy. Do I try to explain that the client can and will see the code in any variety of instances, or that this will drive fellow team members crazy/confused, or that the PM will have a conniption when she requests full documentation of the system and sees processes named after other clients' products?
Always starting by writing concrete classes instead of programming to an interface.
Cowboys who just want to write code before they have finished understanding and debugging their business rules and requirements. Once you have finished slashing your business requirements and rules with Occam's Razor, the code, modules, libraries, data structures, etc. that you need will be bleedingly obvious.
Horse first, then cart.
Reinventing the wheel.
As in: instead of using 30 minutes to look up a standard, textbook-ish solution (using an actual textbook, Google, or whatever) – first use 25 minutes to design your own solution (because it's somehow less boring; see also NIH), then use another 25 minutes to make your solution compile, then use 1:45 to prevent it from crashing when you just try it, then use 3.5 days for some additional fixes based on integration testing (or whatever it is that you do), and finally spend weeks processing bug reports and log files / stack dumps / whatever that you get from the customers.
This is a real, live, production example that I uncovered in code that I needed to maintain in my professional capacity.
I printed it out and kept it on my wall as a trophy for some time.
function isValid(form){
if(checkUser(form.username))
{
// if(checkPass(form.password))
// {
if(checkName(form.name))
{
if(checkCompany(form.company))
{
if(checkCompany(form.email))
{
if(isNotEmpty(form.phone))
{
if(isNotEmpty(form.address))
{
if(isNotEmpty(form.city))
{
// if(isNotEmpty(form.state))
// {
if(isNotEmpty(form.zip))
{
if(isNotEmpty(form.country))
{
if(isNotEmpty(form.url))
{
if(isNotEmpty(form.payto))
{
if(checkForm(form.terms))
{
if(checkForm(form.spam))
{
//alert("Thank you form is processing");
return true;
}
}
}
}
}
}
// }
}
}
}
}
}
// }
}
}
//alert("processing failed");
return false;
}
I'm not a perfect programmer and there are a lot of things I don't know. But for all my imperfections; I care, I do my best, and I always try to figure out how to do it better next time.
.. but programmers who just don't care drive me nuts.
The programmer who thinks that I have no clue how memory management works because I've been working in a language with a garbage collector for 8 years.
The n00b who we just hired that hasn't seen a language without a garbage collector telling me I'm too uptight because I worry about how memory is being allocated and freed.
In our case (long running, high load processes), it's critical we pay attention to how much memory we're allocating and when that memory is going to be freed. Just because the actual collection will be done for me doesn't mean I can bury my head in the sand.
Anyone who calls "SQL Server" "SQL". One is a product of Microsoft, the other is not.
The most annoying thing I have come across are developers that truly believe that if the code builds then it is working and production quality!
My Pet Peeve?
Undocumented code. All the rest can be solved or worked around.
Most of my "favorites" are already up here, but here's one I just ran into again last week (from an otherwise decent programmer):
Traversing the ENTIRE XML DOM tree, when searching for a specific node (or nodes), using methods such as Children[], NextSibling(), etc.... instead of a simple call to SelectSingleNode (or SelectNodes) with a simple XPath expression. This of course resulted in many recursive calls, not to mention HORRENDOUS performance...
Of course, this can be generalized as "not using code the way it is meant to be used".
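SelectSingleNode/SelectNodes are the .NET API the answer refers to; as a hedged sketch, here is the same idea with Java's built-in XPath support (file name and XPath expression are made up for illustration):
import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Node;

public class XPathLookup {
    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new File("orders.xml"));

        // One XPath query instead of a hand-rolled recursive walk over
        // getChildNodes()/getNextSibling().
        XPath xpath = XPathFactory.newInstance().newXPath();
        Node node = (Node) xpath.evaluate(
                "/orders/order[@id='42']/customer", doc, XPathConstants.NODE);

        System.out.println(node == null ? "not found" : node.getTextContent());
    }
}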
Unreadable code. And large, flat LabVIEW block diagrams that take a couple thousand pixels in both directions. And bland and ugly UIs. And noisy workplaces. And knowledge silos. (What? We can only have one?)
I find that all too many business application programmers are abysmally ignorant of data meaning, database design and data access. Virtually every business application has a database backend; a programmer who doesn't understand how to query it efficiently and effectively will end up with a badly performing product that users don't want to use.
A developer who doesn't bother to learn database design principles before actually designing a database will cause problems in the applications for years to come.
Further, their ignorance often results in data integrity issues - meaning the data is unreliable or meaningless (and thus the application is irrelevant) - or in queries so poorly designed that they return the wrong results.
Another problem is the developer who thinks that saving a minute of his precious development time is more important than wasting hours of the users' time every day. Programmers should spend all day every day for a week actually using their applications. They would change how they design them.
Bad or incorrect knowledge of data structures.
"I need to find all untranslated strings in our source. I'll just build an array of all the strings, copy it and compare them to eachother."
Congrats on your n-squared solution. Some folks with modern CS degrees don't even know what a hash-map does. Or why you would ever use one as opposed to an array or list etc...
Drives me nuts.
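For contrast, a minimal sketch of the linear-time approach in Java (the method and names are hypothetical, not from the original answer):
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class UntranslatedStrings {
    // O(n) with a hash set, instead of O(n^2) copy-and-compare.
    static Set<String> findUntranslated(List<String> sourceStrings, Set<String> translatedKeys) {
        Set<String> missing = new HashSet<>();
        for (String s : sourceStrings) {
            if (!translatedKeys.contains(s)) {   // expected O(1) lookup
                missing.add(s);
            }
        }
        return missing;
    }

    public static void main(String[] args) {
        List<String> source = List.of("File", "Edit", "Preferences");
        Set<String> translated = Set.of("File", "Edit");
        System.out.println(findUntranslated(source, translated));   // [Preferences]
    }
}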
Professional programmers who don't understand free software (open source) licenses, and yet either use the code without regard to what the license says, or make blatantly false statements that simply reading the license in question would fix. Now, there are a lot of licenses out there, so it's not necessary to understand the intricacies of every single one, but if you are going to use or discuss one of the most common licenses (GPL, LGPL, BSD, or MIT), you should at least have a decent idea of the basic requirements of that license.
I have found GPL'd software in proprietary code bases with all license notices stripped off. I have seen people assert that because it's free software and they have the source code in front of them they should be able to do whatever they want with it.
On the other hand, there are the people who make blatantly false statements about licenses without having ever actually read the license in question; for instance, asserting that the GPL is viral, or that your code must be under the GPL if you link to GPL'd code.
Just for the record, since I have seen a lot of this confusion recently, the GPL doesn't force you to do anything; it does not infect your code. It is simply a license; that is, it is a set of terms which, if followed, give you permission to copy and distribute that GPL'd code. Those conditions include having no restrictions beyond those of the GPL on code you distribute that is based on (which basically means linked to) the GPL'd code. Your code can be licensed under any GPL compatible license, and if you remove the GPL'd code (including support for linking to it, unless it's the LGPL), then you can go back to using any license you want.
In a dynamic language, not using Duck Typing and littering the code with tonnes of switch statements!
Comparing floating point values from different calculations without using an epsilon.
Example:
if(sin(x) == cos(y)){ /* do something */ }
instead of
/* here epsilon is taken as 0.001 */
if(abs(sin(x) - cos(y)) <= 0.001){ /* do something */ }
Coders who don't know the advantages of keeping code within 80-columns.
I have many, but this one makes me want to hurt myself:
"...but it was working before."
Users of any application - or device - who call you up and say "It doesn't work".
The Tabs vs. Spaces debate. I personally don't care which to use (I'm not picking any sides - they each have their pros and cons, and modern IDEs will apply whatever policy is selected for you). I just hate working on a project where it is not consistent between developers, where I'm constantly asking myself "do I need to switch my editor policy for this file?" and toggling my "show whitespace characters" setting. If you're ever starting a new project with new developers - pick one, put it in your coding style guidelines, and make sure everyone sticks to it, or watch out when you have to modify someone else's stuff. And don't complain if you join a project and the current policy is different from what you prefer, or try to prove that "tabs are better than spaces" (or the reverse) - it'll just make everyone mad, make you sound arrogant, and you'll be bantering over something that is not productive. You can banter about it when you're deciding which to use at the beginning, but after that - leave it alone!
Oh - if you prefer to use tabs - use it only for indenting, and use spaces for alignment. Those who use tabs for alignment bug me.
Programmers who do not unit test their code and then get upset with QA when bugs are found which obviously demonstrate this fact.
When people don't know what "Unit Testing" means
I've heard the phrase Unit Testing thrown around and since I'm a developer I think of stuff like Nunit and so forth but then it hit me that when I hear QA and Managers say "Unit Testing" they're referring to actually just doing the testing, maybe from a set list ("do A then B then C and the output should be D, if it's not then shit's broke").
So then to avoid confusion I started using the phrase Automated Unit Testing to refer to what I'm calling Unit Testing as a developer - which worked fine right up until the managers thought that I was referring to generating Unit Tests (the lists, I guess) automatically and the QA people thought I was trying to automate them out of a job.
I guess I'll just call it "NUnit Tests" and be done with it. Well, until we get migrated to TFS at least.
The remark "reinvent the wheel". Look around you, do you see one size of the wheel fitting all?
When people say "Oh, this is so simple, I know it works, there's no point of writing a test for it".
They are completely missing the point of the test, it isn't just to verify that it works, it's to verify that it still works when people make changes down the road.
People who think "real-time" means fast.
Usually it takes more time and effort to make sure that tasks are completed on time.
Ignoring the latest community libraries/techniques, and continuing to develop software the way people did ten years ago.
JavaScript is the same as Java right? They both have Java in the name, so they must be the same.
People who really believe that Object Oriented Programming is the end all be all of programming, and completely disregard anything else. I'm a Clojure and Haskell programmer. People like that are extremely annoying, and extremely blind.
Reverting a 'small' refactoring because running the unit tests before committing takes too long.
People who know only one language, one that "can do everything". They approach every problem as if it must be solved in their "can do everything" language and never stop to see what else can be done in other paradigms.
My peeve is programmers that don't consider the memory footprint of their software. They develop code using STL or other data structures and they automatically pick a set or map instead of a vector or deque.
Gnome software does this a lot. One of the data structures provided is a GTree that uses GNodes that have five pointers each! Some people use this to store data items smaller than the nodes!
Now imagine what this does when built with 64-bit pointers.
When programmers confuse classes and instances when talking about systems. The word object often gets used ambiguously to refer to classes and instances in both casual and formal conversations about architecture and software engineering.
For me, the worst thing someone who wants to be a programmer can ignore is the need for commenting. Everything else doesn't matter as much, since in a normal workplace somebody else can fix the error. While those issues may be time consuming, not commenting can make any necessary fix or change take much longer than it should, since a developer has to figure out what the code does before they can change it. Some would say this isn't an issue if the same developer who wrote the code is making the changes, but even then people won't always remember what their own code does. This is why commenting is so important. Sadly, I know of a graduate director who has written exactly one comment in 147,000 lines of code, and most of it doesn't work the way it should.
Emulating sets with dictionaries, or failing to see when a set would be appropriate.
Setting the value in a dictionary to some random value instead of just using a set data structure.
d[key] = 1 # now `key` is in the set!!
Or general ignorance of set data structures, using lists/arrays instead.
Sets offer O(1) lookup and awesome operations like union, intersection, difference. Sets are applied less often than they are applicable.
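A minimal Java counterpart to the dictionary trick above (my own sketch; the values are made up):
import java.util.HashSet;
import java.util.Set;

public class SetBasics {
    public static void main(String[] args) {
        // Membership without dummy values - no map-some-key-to-1 trick needed.
        Set<String> seen = new HashSet<>();
        seen.add("alice");
        System.out.println(seen.contains("alice"));    // true, expected O(1)

        Set<String> admins = new HashSet<>(Set.of("alice", "bob"));
        Set<String> online = new HashSet<>(Set.of("bob", "carol"));

        Set<String> both = new HashSet<>(admins);
        both.retainAll(online);                        // intersection -> [bob]

        Set<String> either = new HashSet<>(admins);
        either.addAll(online);                         // union -> [alice, bob, carol]

        Set<String> offlineAdmins = new HashSet<>(admins);
        offlineAdmins.removeAll(online);               // difference -> [alice]

        System.out.println(both + " " + either + " " + offlineAdmins);
    }
}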
Asking a perfectly legitimate question on Stack Overflow that I and numerous programmers would love to discuss and debate, only to have it closed within minutes by trigger-happy individuals.
Myth: End-users can all think like programmers, or else they're incompetent.
Reality: No, they can't. And no, they aren't.
The type of behaviour a programmer expects, versus the type of behaviour an end-user expects, can be vastly different. Trying to explain the rationale so that they understand what's happening and why, rather than fixing the "bug", is not always the ideal method of dealing with this.
If the end-user needs any degree of programming experience to use your program, chances are you're doing it wrong.
Developers that go "solve" the customer requirements by creating some reusable framework or system that is completely unnecessary for what the customer asked for.
I find this is usually done by very intelligent but bored people who would much rather work on something interesting to them than actually solve the problem for the customer.
Caught, but unhandled exceptions
Nothing bugs me more than coming across a try...catch block that doesn't have any exception handling or propagation.
Example:
Try
'do something here
Catch ex as Exception
End Try
Lack of proper exception propagation has cost our dev team countless hours of debugging.
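For contrast, a minimal Java sketch of logging and propagating instead of swallowing (the class and file are mine, not from the original answer):
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.logging.Level;
import java.util.logging.Logger;

public class ConfigLoader {
    private static final Logger LOG = Logger.getLogger(ConfigLoader.class.getName());

    // Log the failure and rethrow with the original exception as the cause,
    // instead of an empty catch block that hides it from everyone.
    public static String load(Path path) {
        try {
            return Files.readString(path);
        } catch (IOException e) {
            LOG.log(Level.SEVERE, "Could not read configuration " + path, e);
            throw new IllegalStateException("Configuration unavailable: " + path, e);
        }
    }
}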
Claiming that things like if/else vs. switch will obviously improve the performance of a program.
Because it's premature optimization, but also because people making such claims don't know what they're talking about, and yet they feel the need to teach other people how everything works.
Other examples:
- Thinking that you just have to add some mutual exclusion, and voilà, you now have a faster program on a multiprocessor machine. It's especially funny when the person telling you that synchronizes threads with busy waiting, and is proud of it.
- Thinking that a garbage collector necessarily slows down a program.
Ignorance of one's own limitations. I can't stand it when someone thinks they know everything there is to know about a topic and give useless or harmful "advice" to someone else.
Drag and drop programming.
My teacher once tried to show off some Ajax in ASP.NET. He started by talking about how the "Ajax would stop the postback of the HTML to the server". He said that Ajax is asynchronous because "the server doesn't have to wait for the client to respond". He also didn't understand that an Ajax request is no different from any other HTTP request. He basically thought it was the XML itself that was sending the request.
Of course I was a bit surprised. I then realized why he did not understand Ajax! He was just drag and dropping UpdatePanels and controls on the page. There is nothing wrong with drag and drop, it can save a lot of time and money. But it's no excuse to not understand the underlying technology, especially when teaching it to others.
Not putting the extra effort to be clean when committing code... Nothing annoys me more than people who commit print statements, extra spaces or end-of-line characters as a result of their debugging.
Changing code, but not updating comments
I come across code sometimes that has evolved over time, but the people who worked on it didn't update the comments around the code - so that the comments refer to what was there before, and not the current code.
Misleading comments in the code are worse than no comments at all.
My pet peeve is poor variable naming! Having spent some time as a maintenance programmer, I can tell you that poor variable naming is pure Hell! You should never name things after yourself, after your pets, or after anything that does not relate to what it is and/or what it is doing in your code! I should never see anything like:
if (john == 0) {
    return fido;
} else {
    return fluffy;
}
5 days from now, no one will know what you were doing, let alone 5 years from now!
I didn't/You can't because the house rules of development say you can't!
I can't count how many times I've been told I couldn't write something a certain way, or been told by someone that they didn't write something a certain way, because some preexisting "house rule" said it wasn't allowed. I don't think anything irks me more than encountering a "rule" that prevents me from implementing something in the most efficient, concise, effective, clear, maintainable way possible because it is either:
- Not a "standard" for the company
- Not allowed because of some wiley rule...usually written decades ago
- Not proper coding style for Java, despite the fact that it's C# code
- Starting type members with a capital...?
- Using properties...??
- Using LINQ or lambda expressions, or anything 'functional'...???
- Not kosher because it's "too new" .NET technology, despite the fact that it's been out FOR YEARS and has been thoroughly vetted by the monstrous Microsoft developer community of MILLIONS.
- .NET 3.0/WCF/WPF
- C# 3.0 features: LINQ, anonymous types, lambda expressions, etc.
These kinds of things just REEK of ignorance. People get so attached to their "purist" or "conformist" ways sometimes that they miss the long- and short-term time, effort, and money saving advantages that are just dangling right in front of their noses.
C# sucks because Microsoft destroyed its original intent as an object-oriented purist language.
I get this one from Java OO purists a lot (no offense). No matter how much I stress the point, provide solid resources (i.e. Eric Lippert's Fabulous Adventures In Coding), or actually demonstrate it... some people refuse to accept the simple fact that C# is a language designed for component development. It was never designed as an OO-purist language, and functional features such as lambda expressions, the var keyword, and LINQ are not bad things that ruin it.
Wake up, people! C# was never designed as a pure object-oriented language. Use it for what it is, and reap the benefits!
In C++:
Myth: NULL is not always zero; it depends on the platform's null pointer address.
Reality: NULL is always zero, and it's not an address. It's an integer.
Many confuse NULL with an address, and therefore think it's not necessarily zero if the platform has a different null pointer address. But NULL is always zero, and it is not an address: it's a zero integer constant expression that can be converted to pointer types. When converted, the resulting pointer is called a "null pointer", and its value is not necessarily the lowest address of the system. But that has nothing to do with the value of NULL itself.
somefunction(){
try { .... put your whole code here .... }
catch{}; // empty catchy!!
}
Developers who don't understand .NET naming conventions.
Examples (from a real library that I had to use):
public delegate void FooBarVoidVoidDelegate(); //FooBar is the component name.
public enum FieldTypeEnum {
OBJECT_NAME,
FIELD_SIZE
}
public delegate void ConnectedDelegate(object sender_, ConnectedEventArgs args_);
"When I have time at the end of the project" , "Later", and "At the end" refer to a non-mythical point in time.
"This is just a hack, but I'll just put it in as a placeholder and fix it later."
"I know the code isn't commented, I go back and add those in at the end."
"I'll update the database documentation when I have time at the end of the project."
Corollary: "Version 2 - The Refactoring" is just as mythical
Myth: "This is just a quick and dirty implementation so we can hit the release date, we'll come back and fix it in the next version"
Fact: The re-factored version of a "quick and dirty implementation" never makes it to the top of the priority list. Ever.
Any Usability Failure is the User's fault
Wrong! If the user can't figure it out, the developer is doing something wrong.
Corollary: Documenting a non-intuitive UI makes it okay
I know it is uncommon to have to quadruple-left-click an icon to make it visible, but it is clearly spelled out in the manual. RTFM!
Users Read
"I am a user and I do not read."
Said Johnny User McSnead.
I do not read dialogs, whether large or small.
I'd not read a help file if it were forty feet tall.
When they don't heed text littered all over the screens.
"It is all your fault", the developer screams.
Using your app isn't my job
so make it wordless you programmer snob!
The Moral of the story: Your job as a programmer is not to create a better class of users, but to make programs that accommodate the users you have. Writing an app that ignores the fact that users don't read (whatever the reason) is one that fails in terms of usability.
Incomplete/inaccurate/missing documentation
I know this was partially answered before, but mine has two parts:
When I inherit code from another programmer, I'd like to see what the intent was behind the code. When I started my current job, I inherited a number of classes that were heavily dependent upon a knowledge of specialized business math. In order to make corrections, I had to spend a lot of time with senior management learning what the math was supposed to be, why, and how it would look on paper (immediately got documented). With proper documentation, my time spent would have been cut by 75%.
Component documentation. My company spent hundreds of dollars for your company's components a few months ago as they have the most functionality that we can use. Now you have a new release, and the documentation is incomplete and inaccurate because you did an overhaul on your methods and properties. Now I have to spend multiple hours figuring out what's wrong with my code because of your changes. Now I have to hunt through source code and keep playing with different switches because it no longer works properly. Please, is it too hard to document not just the change log, but also the help files? I know it's not. All you have to do is show sales a noose and ask them to back off their artificial deadline before the deadline gets used on them (not really, ignore last part).
As a part of number 2, why don't component developers ever document custom exceptions their code can throw? Why do I have to waste more hours testing every case to find out what exceptions can get thrown? Because I don't have the time to find every obscure exception buried 6 or 7 layers deep in the component code that might hit me, I'm forced to generic handling (and yes, this is based on my reality, no exaggeration or whining intended).
Ignorance of the runtime model
I was once criticized by my Java team lead for putting too many methods in a class - "the objects will take up too much memory!" (Methods are stored once per class, not per instance, so the number of methods has essentially no effect on the size of each object.)
People who refuse to learn an IDE
It takes effort to get used to, but if "a text editor with syntax highlighting" is all you use, you're coding at about 40% efficiency and making hard-to-detect mistakes when refactoring. Even if you're using an IDE, if you haven't put in the work to learn its features beyond IntelliSense, you're doing much the same, only with higher memory usage.
If you don't know how to set a conditional breakpoint or watch expression, you're doing it wrong. If you rename variables by changing the name and seeing the errors the compiler spits out, you're doing it wrong. If you undo more than 10 steps instead of using syntax-aware history, you're doing it wrong. In fact, if you ever get a compiler error, you need a better IDE that reports them as you type.
Did you know that a conditional breakpoint in Visual Studio can be used to execute arbitrary code at that point (and the breakpoint will never trigger)? Did you know that JDT's auto-correct can fix typos in variable names? Did you know that in two clicks in ReSharper you can change a string built using concatenation into a string.Format call?
For part of this, I blame Visual Studio -- the actual editor is what a lot of people who don't use IDEs complain about -- slow, and not that feature-rich. Learn 50% the features of Eclipse JDT and you'll type 80% less code, or your money back. ReSharper is close, but it's still not as good as JDT and it's expensive. Vanilla VS's editor is a bloated text editor (but its debugging tools and integration with MS stuff is terrific).
Second up are people who don't know regular expressions. I think I may have put a regex into code once or twice, but a quick find-and-replace regex in your editor with a couple capturing groups can save you loads of repetitive typing and prevent bugs.
Source files that contain numerous commented out iterations of code with no comments
Recently I came across a project that had files containing multiple versions of similar logic, commented out one after the other, with no reasons as to why or even when the code was deemed non-functional.
Worse, the actually running code didn't meet the project requirements either. Worse still, they did have source control set up to keep track of different versions.
For example:
// Working! (not)
If($something)
{
$sql .= "AND (select top 1 x from y where x = $z) = 1";
$sql .= "OR (select top 1 x from p where x != $z) = 1";
}
/*
// Updated by bob
some alternative code
*/
/*
// this code was no good (joe)
some more alternative code
*/
/*
// original verion by (paul)
obviously the original version that also didn't work
*/
Another pet peeve for me is programmers who confuse the meaning of the concept of inheritance in object oriented programming with the biological meaning of the word.
For example, people who call the superclass the "parent class" and who call the subclass the "child class", or who write code examples like this:
class Parent {
// ...
}
class Child extends Parent {
// ...
}
Of course, inheritance in OO programming means specialization, and there is an "is a" relationship between instances of a subclass and the superclass. By writing class Child extends Parent you are saying "a Child is a Parent", which is obviously wrong.
Not just programmers (though they are unfortunately very much represented in this group), but I'm annoyed by people who don't understand or appreciate the role of research in driving progress in technology (I say this as an industrial programmer, not a researcher, btw) and don't understand how long it takes for something to go from an idea to mainstream reality, or how long a history each "new hot technology" really has.
Fanboy-ism
Developers who don't just become complacent but actually become fanboys of some arbitrary language. They often refuse to discuss the shortcomings of their chosen favorite language or the advantages of alternative languages.
Resisting Lower Level Understanding
Developers who work in high-level languages, tools, or toolkits that provide capabilities like JIT compilation, garbage collection, database access, file I/O, and dynamic typing, and refuse to work in situations that require them to understand such "menial" tasks. It's like an author who doesn't know how to write in MLA format because their editor/publisher does that for them.
They develop skill in one language and try to do everything with it; they don't try to understand that there are scenarios where C++ should be preferred over C#. They don't think outside the box.
I find myself particularly irritated when I encounter programmers who act as if documenting bugs is enough, instead of fixing them - people who even argue against those who discover bugs in their code, defending a defective implementation with a "working as designed" attitude.
I hate the "known problems" sections in readme files.
The My-app-should-run-in-full-screen-by-default-even-if-you-have-a-30-inch-monitor-attitude.
I'm not sure if I'd call this "programmer ignorance" but it's definitely a pet peeve:
Code which passes data around in unsuitable, non-specific types.
This to me is one of those code smells that will tell you fairly quickly that the code you're looking at is really bad.
A prime example that springs to mind is some code I was looking at that passed IP addresses (specifically IP addresses mind - not DNS names) as strings. These IP addresses were passed through multiple layers of the application all as strings.
The result, of course, is error-prone code, with it being entirely unclear where the invariant (that you have a valid IP address) should be checked.
Of course what should have been done is that the user entered IP address should have been converted as soon as possible (usually within the UI 'layer') to an IP address structure/class (probably IPEndPoint on .NET). That way the location for validating the address is self evident, and you don't have to repeatedly worry about a broken invariant since IPEndPoint cannot have an invalid address.
Additionally, by the same reasoning, when there is no preexisting class/structure that can be used to represent the data, the developer should consider creating one rather than 'making do' with an unsuitable type that does not enforce invariants.
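IPEndPoint is the .NET type mentioned above; as a hedged sketch, here is a rough Java analogue of converting at the boundary (note that InetAddress.getByName also accepts host names, so a strict IP-literal-only check would need an extra test):
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.UnknownHostException;

public final class EndpointInput {
    // Convert user input to a typed value at the UI boundary, so the rest
    // of the code never passes raw strings around.
    public static InetSocketAddress parse(String address, String port) {
        try {
            InetAddress addr = InetAddress.getByName(address);   // validates the literal
            return new InetSocketAddress(addr, Integer.parseInt(port));
        } catch (UnknownHostException | NumberFormatException e) {
            throw new IllegalArgumentException(
                    "Not a valid address/port: " + address + ":" + port, e);
        }
    }
}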
Developers not understanding the Liskov substitution principle.
To me, this is another bad one - and one which leads to misunderstanding and misuse of inheritance.
A lot of developers are taught to choose between inheritance and aggregation via the "Is-a versus Has-a" relationship, but then still end up misusing inheritance.
Recent example - a developer inherits from a class that represents a TCP connection in order to create a class that handles communication with a device over TCP. I initially tried to explain to the developer that his assertion that his 'device connection' 'is a' 'TCP connection' was false. I couldn't get it through to him, so I tried the alternative tack of explaining Liskov's substitution principle, and that unless the inheriting class could support the parent class's interface, he was fundamentally misusing inheritance.
Unfortunately, I've seen this kind of misunderstanding way too often, and this was not a junior inexperienced developer.
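A minimal sketch of the aggregation alternative (names are hypothetical): the device connection uses a TCP socket rather than pretending to be one.
import java.io.IOException;
import java.io.OutputStream;
import java.net.Socket;

public class DeviceConnection {
    private final Socket socket;   // "has a" TCP connection, does not extend Socket

    public DeviceConnection(Socket socket) {
        this.socket = socket;
    }

    public void sendCommand(byte[] command) throws IOException {
        OutputStream out = socket.getOutputStream();
        out.write(command);
        out.flush();
    }
}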
I tend to dislike any notion that programming is keyboarding and that if you aren't typing you aren't programming.
It calls to mind a physics student I worked with. He would stare off into space for a bit, then he'd write down the solution. By the time he started writing he had already solved the problem.
A little whiteboarding before keyboarding can save more typing (and debugging, etc) than anything else you might do. Programmers are not data entry monkeys.
The belief "What I think is good is always best"
I've worked with many programmers who believe they know what's best for everyone because it's what's best for them. The "best" code is the best solution to the code user's problem. It may not be the most elegant, fastest, easiest to maintain, coolest, easiest to read, best documented, most advanced, newest technology, etc.
Unfortunately too many employers don't spell out their requirements for good decision making so I can't really complain too much.
Ignorance of how to use a debugger
I see so many people waste time trying to figure out what is going wrong in an app by using print statements or blindly changing the code, when the problem could be found by spending 30 seconds in the debugger, stepping through and examining values.
Similarly, a core file is not just the unsightly mess left when your app defecates in fear before dying. There is valuable information inside that will, in most cases, lead you directly to the source of the crash in very little time.
This isn't just a problem with young programmers, either. Many people start doing things one way, never bother to learn the better way, and rise to very senior positions without knowing how to save themselves hours of work by running their app in a debugger. I weep to think of how many millions of dollars have been wasted over the years by these people. (OK, that's a little over the top, I admit. But this is about pet peeves, so pardon me for getting a little worked up.)
I blame the schools: I got my CS degree at a very reputable university, but using a debugger didn't appear in the curriculum for the intro-level courses. At most, it was mentioned as an afterthought. When I later worked as a tutor for the walk-in CS tutoring center on campus, I gradually became aware of this problem and started teaching the students who came in how to solve their problems using the debugger. Later, as a TA, I created a short seminar about debugging practices in an attempt to help the situation. However, now as a practicing software engineer, I still find that I have to instruct other very experienced engineers in the use of a debugger.
"It doesn't matter that Java doesn't have feature X - because with a little bit of coding around (in all the places where X would be used) achieves the same effect"
Where X can be:
- closures
- continuations
- an OO metamodel
- anything else
It's another way of saying "I get paid to do Java and it's a general purpose programming language so I don't need to bother to learn anything else"
Failure to use source control, or failure to use it sensibly.
Source control is the memory of your project. If you don't check anything in, you're living in an enforced state of amnesia. You've heard of those people who have head injuries and lose the ability to store things into long-term memory? That's you without source control.
On the flip side, if you keep commented-out code in your current source files because "I might need it someday," you're that guy who doesn't trust himself to remember anything and goes around wearing an upside-down nametag.
And if you're working on a team where each member has different standards about what gets checked into version control at what stage of development, you're a normal person (who can remember your 5th grade teacher's haircut but not where you put your car keys last night).
[Slight tangent: we recently had a developer get very irate with our (competent) sysadmin because the latter had redeployed a script from the source repository. Turns out the developer hadn't checked any of his changes into SVN in the six months (!!!) he'd been working for us. This is what happens when a dev gets hired by the marketing team.]
This is really anal, but I abhor the use of NULL to describe the character that is used to mark the end of a string in C.
NULL is associated with pointers.
NUL is the name of the ASCII character that represents '\0'
My pet peeve is programmers who don't try to understand something; they have the attitude that the "compiler" will figure it out for them.
Humility. Eric Evans said it in his foreword to Jimmy Nilsson's book Applying Domain-Driven Design and Patterns: the best coders have the rare combination of self-confidence and humility.
I find many developers have plenty of self-confidence but do not take well to good criticism. Dunno whether this can be blamed on ignorance of human nature.
There are a lot of good answers here already. Here are my top four:
Continuous Learning: It's one of my most important interview areas. If you aren't learning anymore, you shouldn't be in this business.
Arrogance: I have no patience for a developer that say something should be done that way "because it's the only way".
Over-commenting: If your code is written so that it can be maintained by others, it doesn't need comments describing each line.
Consuming Errors: Putting a try/catch around each section or just returning from each function when an error condition occurs isn't HANDLING an error. It takes a lot more time to track down a bug if an error condition is consumed.
Ignoring the knowledge created and work developed before they could type.
Very often, programmers think that anything older than them is old fashioned or outright invalid. This is a big mistake. Although software development moves extremely quickly, many great steps were taken decades ago and have not been superseded at all. Ignoring that is stupid.
I can see this sometimes when programmers (usually young ones) are asked about books or other reference material. They seem oblivious to what was written before 2004 or something like that. There are great books today, but there is a lot of crap too. Ten or fifteen years ago publishers were much more selective, and fashion was not that important a factor to sell computer-related books.
Now you'll think I am a melancholic pensioner. Ha! :-)
The belief that the ease of writing code in a language is a more important quality of the language than the ease of reading code in that language.
Using ".Net is the future" as the primary justification for rewriting existing working software. The same developer once described it as "God's will".
I read 4 pages of that and I really need to post... (hope no reposts;))
I hate it when developers forget that they write software to BE usable by people who are not programmers, and don't take into account that something may be difficult for a non-programmer to grasp.
Also, 'index-based programming' - a fantastic paradigm I have kept on rediscovering in many lines of code, e.g.
List<int> linesChecked
List<Rectangle> drawnRectangles
List<whatever> something
and then orchestrating all the lists using one shared index, because the items are ESSENTIALLY a single object (see the sketch below). Duh.
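A hedged sketch of the alternative in Java (names adapted from the snippet above): one list of real objects instead of three parallel lists held together by a shared index.
import java.awt.Rectangle;
import java.util.ArrayList;
import java.util.List;

public class CheckedLines {
    static class CheckedLine {
        final int lineNumber;
        final Rectangle drawnRectangle;
        final String whatever;

        CheckedLine(int lineNumber, Rectangle drawnRectangle, String whatever) {
            this.lineNumber = lineNumber;
            this.drawnRectangle = drawnRectangle;
            this.whatever = whatever;
        }
    }

    // One collection, no index juggling across parallel lists.
    final List<CheckedLine> lines = new ArrayList<>();
}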
A third: not leaving a default: case in a switch statement... it's there because something CAN really happen - someone can recompile the enum, whatever... duh (an infinite source of bugs for me :)
if (someValue == true){
return true;
} else {
return false;
}
Instead of
return someValue;
Test by try/catch
This must be bad for my heart.
The example is in J. Suppose you have two vectors. The dyad ,. means stitch, a.k.a. align these two vectors side by side in a matrix. Now suppose that for some reason you have vectorA and vectorB, and you know that vectorB can be one item shorter. I've seen this in a function trying to alternate colours between rows: vectorA has the odd rows, vectorB the even rows, so vectorB will either be the same length or one item shorter.
try. vectorA ,. vectorB catch. vectorA ,. (vectorB, a:) NB. append an empty item end.
bool finished = false;
for (i = 0; i < size; i++) {
if (something(i))
finished = true;
...
if (finished) break;
}
Rather than
bool finished = false;
for (i = 0; i < size && !finished; i++) {
if (something(i))
finished = true;
...
}
Languages support a full conditional in loop clauses for a reason.
Some prefixing of variables is just aggravating:
- temp- In the end, all variables are temporary
- scr- (for scratch, meaning temp)
- my- (OK in example code, but not in implementation)
Any form of cargo culting (programming, general computer usage, etc.).
“Hmm, the result still has “&gt;” thingies in it. I guess I have to add yet another decoding pass.”
Take the time to understand the involved specifications and standards to find out whether this is a bug in the content the program is consuming, a bug in the standard, a bug in the specification, or a bug in some other part of the system. If the data is supposed to be free of character entity references at this point, then there is a bug somewhere (unless the bug is in the specification and it really is OK to have character entity references at this point!). Find the bug to understand the problem.
“Hmm, that did not work. sudo worked on this other thing I was doing, I guess I will try it here, too.”
The solution to every permission problem is not “do it as root”.
Do not “paper over” the immediate problem with the first thing that comes to mind. Solutions from one problem should not be automatically applied to any other problems unless the solution will solve the problem in all applicable situations.
Getting a solution from “the Internet” is fine, but do not blindly apply it. Read the documentation, read the code, do some research, experiment in temporary environments. Learn from the given solution. Do not parrot it. Only with a proper understanding of a particular solution can one determine whether it is a proper solution for a specific problem.
This code would be so much better if I could just re-build it from the ground up.
No, it wouldn't.
BY FAR my biggest annoyance is: Leaving Code Commented out everywhere!
Not improving the code around a fix - just patching it and leaving the mess alone.
- Not using the framework: for instance, writing their own code to log something instead of using the common logging framework the rest of the team uses.
- Printing to the console and not removing it, cluttering it up for other developers...
Jumbo Methods
I can't stand "jumbo methods," often characterized by:
- no extracted helper methods (but you might get block comments if you're lucky)
- dozens of variables in scope (rather than limiting their scope by extracting methods)
- no clear separation of tiers (e.g. intermixed validation, persistence, and presentation logic)
- control-flow hell
e.g.
void DoIt() {
    // if the view is valid
    if (TextBox1.Text != string.Empty && ...) {
        var sum = 0;
        // process each element
        for (var i = 0; ... ) {
            // make sure it's a good element
            if ( ... ) {
                Status.Text = "Bad Element";
                break;
            }
            // process each subelement
            for (var j = 0; ... ) {
                // add to the sum
                sum += ...
            }
        }
        // remember the sum
        ViewState["sum"] = sum;
    }
    // if the view is not valid
    else Status.Text = "Required field missing";
}
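For contrast, a hedged sketch of the same flow with the helpers extracted (the helper names and signatures are assumptions): each concern gets its own small method and its own few variables.
void DoIt()
{
    if (!ViewIsValid())
    {
        Status.Text = "Required field missing";
        return;
    }

    int? sum = TrySumElements();
    if (sum == null)
    {
        Status.Text = "Bad Element";
        return;
    }

    ViewState["sum"] = sum.Value;
}

// Hypothetical helpers - each keeps its own handful of variables in scope.
bool ViewIsValid() => TextBox1.Text != string.Empty /* && ... */;

int? TrySumElements()
{
    var sum = 0;
    foreach (var element in GetElements())          // hypothetical accessor
    {
        if (!IsGoodElement(element)) return null;   // hypothetical validation
        sum += SumSubElements(element);             // hypothetical inner loop
    }
    return sum;
}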
jQuery != javascript
I see a lot of questions around here asking "How do I do X in jQuery?", even when the OP has no idea whether jQuery is relevant.
This has elements of other answers: fanboyism, and using frameworks that you don't understand.
There are some great answers at the top! This really is just a personal pet peeve.
Indifference towards convoluted compilation/build/project-organization processes that create an unbelievable amount of mundane work.
I kid you not, I've seen a dev environment where checking out the latest revision requires 10 (GUI) steps, and so does building. Running the full test suite might well require 100+ (GUI) steps.
My pet peeve is my ISP. Especially when support says: "Turn off your modem, wait 15 seconds and turn it on again. Is it okay now?"
Programmers writing "helper" code (eg: buggy date validation that doesn't use regex to parse and breaks on certain dates since they were unable to understand testing works by not merely running it once on their machine) that can easily be found in the default language API or some commonly used library such as Apache Commons.
Programmers who decide to deviate from a standard just to make something work. This lack of concern also means they won't share that information until after they have committed several revisions.
Programmers are people, and as such are expected to function as socially responsible members of society or the company.
It continues to amaze me how many people believe that:
All programming languages are fundamentally the same; they just use different syntax.
Data structures... people don't even know what they are, yet they go on programming anyway.
Thinking that their code is the reason the business exists, not the other way around.
Pointless Tables and ASP.Net controls
Example: clean, functional code
<div>
I is in your browser
</div>
<div>
Showing your static text
</div>
Horrible beyond belief code
<asp:Table ID="table1" runat="server">
<asp:TableRow ID="row1" runat="server">
<asp:TableData ID="data1" runat="server" Width=191px>
<asp:Panel runat="server" ID="pnl1">
<asp:Label runat="server" ID=lbl1" Text="I is in your browser" />
</asp:Panel>
</asp:TableData>
</asp:TableRow>
<asp:TableRow ID="row2" runat="server">
<asp:TableData ID="data2" runat="server" Width=191px>
<asp:Panel runat="server" ID="pnl2">
<asp:Label runat="server" ID=lbl2" Text="Showing your static text" />
</asp:Panel>
</asp:TableData>
</asp:TableRow>
</asp:Table>
Using Java.
You can't imagine how much I hate software written in Java. Clumsy-looking GUIs with tons of display bugs, the resource-hogging "Java automatic updater"...
Second worst pet peeve:
Using anything other than C# / F#. There is no justification, except in extreme cases (OS development, or when you need to talk to the CPU directly to use SIMD). And then - only write as little as possible in unmanaged horror languages from hell (C++, etc... choke), or, even worse, "dynamically typed languages", which are nothing but toys for masochists who hate understanding code and helpful tools like IntelliSense.
Third worst pet peeve:
Unnecessary P/Invoke...
Mine is technical and is about text encoding. There are a LOT of programmers out there who just ignore it or don't even know what it is. And in an era where isolated apps are disappearing in favor of APIs, integration, messaging, etc., every developer should know how text encoding works and what libraries/functions their favorite programming language offers to deal with it.
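A hedged C# sketch of the basics everyone should be comfortable with: bytes only become text through an explicit encoding, and guessing wrong silently mangles anything beyond ASCII.
using System;
using System.Text;

class EncodingExample
{
    static void Main()
    {
        string original = "café";

        // Text crossing a process boundary is bytes in some specific encoding.
        byte[] utf8Bytes = Encoding.UTF8.GetBytes(original);

        // Decoding with the right encoding round-trips cleanly...
        Console.WriteLine(Encoding.UTF8.GetString(utf8Bytes));                      // café

        // ...while decoding with the wrong one silently mangles the text.
        Console.WriteLine(Encoding.GetEncoding("ISO-8859-1").GetString(utf8Bytes)); // cafÃ©
    }
}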
"Programmers" that don't RTFM, those that want you to chew their food for them. They just come to forums or SO and ask questions that even google can answer!
Unnecessary comments! It drives me crazy when I see comments like:
if (condition)
{
    ...
} // end of if
or
x + y // add x to y
or
catch (Exception e) {
    // exception logic
}
Fonts that don't distinguish between zero (0) and the letter 'oh' (o/O), or between lowercase 'el' (l) and the number one (1). When done right, it's drop-dead easy to tell the difference. When done wrong, just hope that guessing wrong doesn't have bad consequences. CAPTCHAs, anyone?
What I call the "King Nebuchadnezzar attitude":
Programmers posting on web site asking for help, saying "This is what I did and it didn't work" without saying in what way it didn't work. Compiler error? Runtime error? Wrong output? Electric shock? Somehow they determined that it didn't work, but they're not going to share that critical information with the people they're asking to help them.
You are supposed to figure out what they mean by "it didn't work", as well as why it didn't work.
Non-programmers are even worse... but that's off-topic.
Listening to ASP.Net (VB) developers with several years' experience comment that they cannot work on a particular application because it is written in ASP.Net with C# and they "don't know how it works".
Redundant naming.
For instance:
class bookType
{
public:
    void setBookTitle(string bookTitle);
private:
    string bookTitle;
};
Let's play a game of how many times you commonly repeat the word "book" now. Believe it or not, this is a snippet of example code from my college C++ book. It annoys me to no end.
Common usage:
bookType myBook;
myBook.setBookTitle("My book title");
I would have done:
class bookType
{
public:
    void setTitle(string title);
private:
    string title;
};
Especially because it makes the structure a bit brittle. For instance, if you decided to have a mapType inherit from this bookType, it'd look very odd having:
myMap.setBookTitle("Map of Asia");
Needlessly Nested If statements
if(something == true)
{
    if(somethingElse == true)
    {
        if(somethingElseOther == true)
        {
            // Logic
        }
        else
        {
            return false;
        }
    }
    else
    {
        return false;
    }
}
else
{
    return false;
}
Instead of doing something like:
if(!something) return false;
if(!somethingElse) return false;
if(!somethingElseOther) return false;
// Logic
Or like this
return something && somethingElse && somethingElseOther;
Programmers that think they don't need to consistently indent code and make it as readable as possible for the next developer.
The most recent answer made me think of:
- Programmers who don't realize that a Tab does not necessarily equal four spaces (or eight, or two)
(Or, to remove the double-negation: programmers who assume that a tab equals four spaces)