How about: unit testing doubles development time.
You need to know all of your requirements ahead of time because it's too expensive to change things later in development.
In reality, no one ever knows all of their requirements ahead of time, and you can develop code in such a way as to mitigate the inevitable changes and new requirements. This might not be as much of a truism as it used to be, now that Agile development methods have gained currency.
Everything should be done in stored procedures
or inversely
Never use stored procedures
Documentation can be written after the software has been deployed. (We'll have time to do it then)
Our project is going to miss its deadline! ... Quick, let's throw more people onto the project! (see The Mythical Man-Month)
Lines of Code is a good way to track productivity of your developers and overall project health.
Your user interface doesn't matter so long as the code works.
You don't need to worry about security until later on in the project.
There is one True way of programming that's suitable for everything, and any other way is always wrong. Mostly seen among OO or functional fanatics.
Performance-related falsisms:
- To find performance problems you have to run the code as fast as possible and time it every which way, guessing where the problems are based on how long things take or how many times they are invoked.
That is fine for monitoring program health, but pinpointing problems is not about measuring. It's about finding cycles that are being spent for poor reasons. This does not require running fast. It requires detailed insight into what the program is doing: typically, sampling as much of the program state as possible and understanding in detail why the program is doing what it's doing at each sample time.
- To find performance problems you need a large number of samples so as to get high measurement precision.
Typical performance problems worth pursuing take from 10% to 90% of execution time. (That is how much execution time is reduced after you fix them.) The object is to find the problem, not to know precisely how big it is. Even a small number of random-time samples is virtually guaranteed to display the problem, assuming they are taken during the overall time span when the performance problem exists. (A quick back-of-the-envelope calculation follows this list.)
- Compiler optimization matters.
It only matters in code that 1) you actually compile (as opposed to libraries), and 2) you actually spend much time in (as opposed to code that spends all its time calling functions, explicitly or implicitly).
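A minimal sketch of the arithmetic behind the sample-count point above (my own illustration, not from the original answer): if a problem is active for a fraction p of the run, the chance that at least one of n random-time samples lands inside it is 1 - (1 - p)^n, which climbs toward certainty surprisingly quickly.

```python
# Probability that n independent random-time samples catch a problem that
# accounts for fraction p of execution time (illustrative numbers only).
def hit_probability(p, n):
    return 1 - (1 - p) ** n

for p in (0.1, 0.3, 0.5):
    for n in (5, 10, 20):
        print(f"problem takes {p:.0%} of time, {n:2d} samples -> "
              f"{hit_probability(p, n):.1%} chance of seeing it at least once")
```

Even a problem at the low end of that range (10% of run time) shows up in at least one of 20 samples almost 88% of the time.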
Programmers at the same level are completely interchangeable
The one that irks me the most: Published "best practices" work for everyone.
Malarkey.
Every company is different. The staff is different, the business model is different, the clients are different, the fiscal outlook is different, the culture is different, the politics are different, the technology is different, the long and short term goals are different, and on and on and on.
What works for one company will not necessarily work for another company. And I cannot repeat this enough: There is no silver bullet. Just because some guy (or some group of guys) wrote it in a book and slapped a fancy title on it does not make it irrefutable, beyond reproach, or an iron-clad guarantee that it will work in your situation.
You should carefully review any given "best practice" (or mediocre practice, for that matter) for its suitability for what you're doing, where you are, and where you're going before you even think about putting it in place.
Two words, folks: Risk analysis.
Computers are really clever and will solve any problem we encounter.
From what I've seen over the years, there appear to be two distinct groups of people: those who think computers are really clever and those who think computers are really dumb. Unfortunately, most people believe the former, when in fact computers are really dumb - they do exactly what we tell them to do, even if that is to start a global thermonuclear war.
Skizz
Use a simple editor or IDE and you will be productive at once.
Not spending your time learning hotkeys, regex-based editing and the other power features of a professional tool may save you a few days, and it will cost you hundreds of them.
"SQL in code is bad! Get the SQL out, and then we're good on data access." This simplistic thinking contains some truth but causes a lot of problems. Good data access strategy is sooooo important.
- Unless you know how and why data layers, sql functions, etc. can make things much better, just busting things out into procedures and functions can actually decrease the quality of your solution.
- Thinking simplistically that getting sql out of your code is what really matters keeps you from really thinking through your data access scheme.
- SQL in code is a bad smell. In an imperfect world, though, you take shortcuts, and this can be a legitimate place to cut corners. If you're not really going to separate your concerns properly, making 60 poorly named SQL procedures and functions just makes life harder on the guy who has to come fix the mess a few years later. I know because I've been that guy several times.
Pair programming means double the development cost!
The article "Pair programming: what research says on the costs and benefits of the practice" would be a source to counter that.
Exponential-time algorithms are slower than polynomial-time algorithms.
In linear programming, the simplex algorithm is exponential, but it is typically much faster than its polynomial ellipsoid algorithm counterpart.
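A toy illustration of why the asymptotic class alone doesn't settle which algorithm is faster in practice (my own numbers, nothing to do with simplex or ellipsoid specifically): an exponential cost with a small base can stay below a polynomial cost with a large exponent for every input size you are ever likely to run.

```python
# Compare a slow-growing exponential against a steep polynomial.
# 1.01**n does not overtake n**6 until somewhere past n = 5,000.
def exponential_cost(n):
    return 1.01 ** n      # exponential, tiny base

def polynomial_cost(n):
    return n ** 6         # polynomial, large exponent

for n in (10, 100, 1_000, 5_000, 10_000):
    e, p = exponential_cost(n), polynomial_cost(n)
    winner = "exponential smaller" if e < p else "polynomial smaller"
    print(f"n={n:>6}: exp {e:.3e}  poly {p:.3e}  ({winner})")
```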
Based on a paper from 1978, people quote that maintenance is 20% corrective, 20% adaptive, and 60% perfective. These percentages came from a survey of managers' opinions, and no empirical evidence. In 2003, another group of researchers (Stephen R. Schach, Bo Jin, Liguo Yu, Gillian Z. Heller and Jeff Offutt) challenged this by studying maintenance data for Linux, RTP, and GCC, and found wildly different numbers. See their paper here: Determining the Distribution of Maintenance Categories: Survey versus Measurement.
Big-O Notation: O(1) < O(n)
We all make this mistake -- especially me :)
I can't find the post, but I remember reading a microcontroller blogger who described a case where his hardware needed to store some key/value pairs. Performance was critical and a hashtable with constant time lookup seemed to make sense; if I remember correctly, this setup performed quite well for years.
Out of curiosity, the programmer swapped the hashtable for an unsorted linked list, which easily beat the hash table for dictionaries of fewer than 20 items. Later, a sorted array with binary search, with O(lg n) lookup, absolutely demolished the hash table below about 500 key/value pairs, although it was slightly slower than the linked list for fewer than 10 items.
Since the original hardware never stored more than 15-30 keys at any given time, a sorted array replaced the hash table, and our blogger became dev-team hero for a day.
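A rough Python analogue of the anecdote (my sketch, not the blogger's code): at these sizes the constant factors dominate, so the asymptotically "worse" structures are entirely competitive.

```python
# Compare a hash table (dict), a linear scan over pairs, and a sorted list
# with binary search (bisect) for a small number of keys.
import bisect
import random
import timeit

N = 20
keys = [f"key{i:03d}" for i in range(N)]
table = dict(zip(keys, range(N)))          # O(1) average lookup
pairs = list(table.items())                # O(n) linear scan
sorted_keys = sorted(keys)                 # O(log n) binary search
sorted_vals = [table[k] for k in sorted_keys]

def dict_lookup(k):
    return table[k]

def linear_lookup(k):
    for key, val in pairs:
        if key == k:
            return val

def bisect_lookup(k):
    return sorted_vals[bisect.bisect_left(sorted_keys, k)]

probe = random.choice(keys)
for name, fn in [("dict", dict_lookup), ("linear", linear_lookup),
                 ("bisect", bisect_lookup)]:
    t = timeit.timeit(lambda: fn(probe), number=100_000)
    print(f"{name:>7}: {t:.3f}s for 100,000 lookups among {N} keys")
```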
SQL Server specific: Stored procedures perform better than dynamic SQL because they're precompiled.
I don't know how many times I've seen this one, but it's wrong.
See SQL Server 2000 documentation:
SQL Server 2000 and SQL Server version 7.0 incorporate a number of changes to statement processing that extend many of the performance benefits of stored procedures to all SQL statements. SQL Server 2000 and SQL Server 7.0 do not save a partially compiled plan for stored procedures when they are created. A stored procedure is compiled at execution time, like any other Transact-SQL statement. SQL Server 2000 and SQL Server 7.0 retain execution plans for all SQL statements in the procedure cache, not just stored procedure execution plans.
See SQL Server 2005/2008 documentation:
When any SQL statement is executed in SQL Server 2005, the relational engine first looks through the procedure cache to verify that an existing execution plan for the same SQL statement exists. SQL Server 2005 reuses any existing plan it finds, saving the overhead of recompiling the SQL statement. If no existing execution plan exists, SQL Server 2005 generates a new execution plan for the query.
SQL Server creates an execution plan for all SQL statements on their first invocation, then caches the plan in memory for future use. Apart from edge cases where network latency slows down transmission of huge SQL strings, there is no performance benefit gained by using stored procedures over dynamic SQL.
Microsoft IIS is insecure / Apache is secure
You hear this one a lot too, but the criticisms of MS/IIS security are about 10 years outdated. Compare vulnerabilities on Secunia:
Apache
- Apache 1.3.x: 22 advisories, 11 vulnerabilities, 1 unpatched (less critical)
- Apache 2.0.x: 41 advisories, 26 vulnerabilities, 4 unpatched (less critical)
- Apache 2.2.x: 17 advisories, 28 vulnerabilities, 2 unpatched (less critical)
Microsoft IIS
To look at it another way, there is a well-known article from March 2008 which summarizes some findings by Netcraft and Zone-H. Although there are 1.66x as many Apache sites as IIS sites, Apache sites are defaced 2.32x as often, so per site Apache is defaced about 1.4x as often as IIS. The Slashdot reaction to this article is worth reading.
Design your application from the ground up: start with the database model.
"Premature optimization is the root of all evil" Knuth
In print it is very often used without the context of the full quote.
Additionally, neither of the two people who are said to have created it (Hoare is the other) claims to have originated it.
I typically associate the above quote with laziness and excuses when I hear or read it.
The full quote (whatever the origin):
"We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil."
The difference made by the added qualification is huge.
The more design patterns you use the better.
Applying design patterns can make code better, and it's great to have a shared vocabulary for developers. However, many solutions don't require patterns, and knowledge of patterns is no substitute for understanding algorithms, data structures, and the fundamentals of problem solving.
From the premature-optimizations department:
Denormalize your schema up front because normalized schemas are too slow and full of joins to be usable in the Real World.
Number Of Bugs per Line Of Code measures Quality (yep, not so true or relevant in the practical world as we know it today)
Never, ever use gotos because they're harmful.
This was originally cited as "true" because it was noticed that code with lots of gotos was poor in quality.
This is an example of attacking the misused tool (anyone for try/catch?) instead of the real problem, which is being unable to recognize and prevent unmaintainable, poor-quality code.
Reflection (in .net, not sure about Java) is very expensive and therefore extremely slow, hence it should be avoided at all costs.
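The usual antidote is to measure in your own workload rather than assume. This is not .NET reflection, but here is an analogous micro-benchmark in Python (my own sketch) timing a dynamic attribute lookup by name against a direct access; whether the overhead matters depends entirely on how often it sits on the hot path.

```python
# Time direct attribute access vs. dynamic lookup by name (getattr).
import timeit

class Widget:
    def __init__(self):
        self.value = 42

w = Widget()
direct = timeit.timeit(lambda: w.value, number=1_000_000)
dynamic = timeit.timeit(lambda: getattr(w, "value"), number=1_000_000)
print(f"direct:  {direct:.3f}s per 1M accesses")
print(f"dynamic: {dynamic:.3f}s per 1M accesses ({dynamic / direct:.1f}x)")
```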
Business Development Guy: "If I can write the spec, then anybody can write the spec...so anybody can build my product"
Static typing and strong typing are the same thing.
There are plenty of languages that are strongly and dynamically typed out there; Python is a particularly popular example.
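A minimal sketch using the example the answer already names: Python never declares variable types and checks them only at run time (dynamic), yet it refuses to silently coerce unrelated types (strong).

```python
x = "1"
x = 1                    # rebinding to a different type is fine: dynamic typing

try:
    result = "1" + 1     # no implicit coercion between str and int: strong typing
except TypeError as exc:
    print("strongly typed:", exc)
```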
Staying late and working overtime is the only way to make deadlines.
...sure, until you are so bloody exhausted you can barely see straight and the excessive caffeine leads to the shakes or a mental kernel panic.
... to heck with better planning/doing actual estimates/setting more realistic expectations.
We can defer this bug as long as we document it in the release notes.
A more recent one:
Don't bother with that, hardware is cheap, we'll buy more servers.
Yeah, hardware is cheap. But when you buy a server, you pay a price every month for hosting and/or electricity and/or bandwidth, and you add an extra cost to your maintenance too: you spend more time on migrations and deployments.
Yes, hardware is cheap to buy, but unless you are a cloud-computing-virtualisation-sysadmin hero, owning a new computer carries significant ongoing cost.
Low-level languages (Assembler, C) produce faster code than high-level languages (C++, Java, OCaml). Often when you show people benchmarks that prove the opposite, they even think there's some kind of "trick" involved, because "nothing can be faster than C except assembly, right?"