views: 1080
answers: 27

There is a strong case for companies that are experiencing scaling problems with their current architecture to simply spend $$$ on cutting-edge hardware to achieve the performance and scale they require.

In most cases I have experienced and read about, the business case for software re-architecture never stacks up against the business case for more computing horsepower.

Simplified reasoning: Hardware is cheap, developers are expensive.

As a software developer, does this bother you, or does it appeal to your pragmatism?

+5  A: 

Well, I can tell from experience that it is not impossible to reduce the execution speed by a factor of 200 (or even more) by profiling and optimizing your code. I don't think such factors are easily obtained with more hardware.

Gamecat
But what if it doesn't need to go 200 times faster?
Paul
And why do you want to _reduce_ execution speed? ;)
Bombe
Well, we write calculation heavy software. So it matters if it executes in a few minutes or a few days ;-).
Gamecat
I think he meant reduce latency/execution time.
Tim Matthews
Right, but I think the OP's saying that's an uncommon scenario.
Paul
Sadly, a speed increase of 200x might say more about the original design than about your optimization techniques...
Roddy
Indeed. I once was given a database op which only looked at 15 records at a time (out of thousands) because it took half an hour to do even that much. Within a day I had it taking 14 seconds to do the whole db. Not because I'm an uber-optimizer, but because the original design was atrocious.
Dave Sherohman
+10  A: 

For me it makes my life easier, because I don't have to strive to optimize every inch of my code. I can write cleaner, more readable, albeit somewhat slower code. This in turn decreases development/maintenance time and thus - expenses. The savings can then be used to buy better hardware.

That said, it's not an excuse for writing sloppy code of course. I do think a lot about my data structures, DB architecture, etc. I try to make them as efficient as possible, without sacrificing elegance.

Vilx-
+1 - "The savings can then be used to buy better hardware."
Jenko
+6  A: 

IMO, problems of scale are rarely linear in nature, whereas throwing hardware at the problem typically provides a linear performance improvement. Look at performance graphs for most systems and you tend to see S-curves rather than straight lines. As such, throwing hardware at the problem is only a short-term solution.

Shane MacLaughlin
+2  A: 

Sadly, most versions Vn+1 of software are bigger and slower than version Vn.

Given this, would you - as management - really believe someone who said that their 'new' version would be faster than the old?

I know this is a simplification, and it is often possible to make radical performance improvements with relatively simple changes, but history tells us that's often not the case.

Roddy
+13  A: 

When it happens with other people's code, it's sensible and pragmatic. When it happens with my code, the code should have been optimized!

Greg
Very nice, this is a great answer
nick_alot
Strange, I have completely the opposite view ;-)
Valerion
+5  A: 

Pragmatic.

It's not about whether the ultimate speed up can come from hardware or software, it's about the best way to solve the business problem of capacity and performance, and if the hardware route delivers what's wanted for a lower cost, why not?

Paul
+2  A: 

Quite a few of the large companies that tend to do this forget that it's not uncommon to have a yearly maintenance cost of $10,000 or more for a production server that is located in a mountain somewhere.

So say you're scaling at least one web server and one app server over 5 years: at $10,000 per box per year, that's at least $100,000 just for hosting the boxes. And that's only the start of the cost analysis.

How much optimization could you do for that kind of money ?

krosenvold
Probably not that much, if you intend to pay people their salary. 10k is pretty cheap.
Greg Dean
Arguably you could hire someone to fix the problem.
krosenvold
$10,000 is still cheaper than most programmers :)
epochwolf
Assume that the software requires minor tweaks, about a week of work for a staff programmer. $10,000 for a week of work implies a $480,000-a-year programmer. Quite an expensive programmer.
BubbaT
+12  A: 

Well, the business case makes sense. Compared to open-heart surgery on a software architecture, it's hard to argue with Moore's law (though software environments have added so many layers of abstraction that we're getting there :-)

Back in the day I was able to get a 30% speedup in an encryption implementation by looking at the assembler that the compiler produced and removing a single unnecessary instruction from an inner loop. Nowadays we are so many abstractions away from the raw metal that this sort of thing would be impossible to even contemplate.

edit: I think there is still a place for optimisation of the sort where you replace a naive algorithm of O(N^2) or O(N!) with one of O(N), but that is almost in the category of bug fix rather than optimisation. And generally you may be able to get some useful gains with small local optimisations in well-encapsulated code that is heavily used.
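
As a toy example of that kind of algorithmic fix (the function names and data here are invented for illustration, not from any real system), replacing an all-pairs comparison with a single pass over a set turns O(N^2) into O(N):

    def has_duplicates_quadratic(items):
        # O(N^2): compares every element against every other element.
        for i in range(len(items)):
            for j in range(i + 1, len(items)):
                if items[i] == items[j]:
                    return True
        return False

    def has_duplicates_linear(items):
        # O(N): a single pass, remembering previously seen values in a set.
        seen = set()
        for item in items:
            if item in seen:
                return True
            seen.add(item)
        return False

    data = list(range(100000)) + [42]      # contains exactly one duplicate
    print(has_duplicates_linear(data))     # True, near-instant even for large N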

But I'm also reminded of the rules of optimisation:

Rule 1: Don't optimise

Rule 2 (for experts only): Optimise later

frankodwyer
And never optimize without data. Unless you've measured where that bottleneck could be, you're just guessing.
duffymo
+1  A: 

I must admit it doesn't bother me too much. If something can be solved more cheaply with additional hardware, then great. As long as value is added by a solution, then it is OK even if it's not ideal. While I would be tempted, as a developer, to look for a software solution, sometimes there is little to be gained from it financially. It's like trying to squeeze all the performance out of a 286 by developing in assembler when you can just upgrade to a multi-core machine for less.

mezoid
+3  A: 

If you want to think about the environment, it's much better to spend money on developers than hardware.

If existing hardware can be utilized better, you get a smaller environmental footprint by not having to produce new hardware, ship it, run it and cool it.

But it has to be said that newer hardware might also give more performance for the same footprint. Hardware upgrades can save time and make good sense from an environmental angle, such as adding RAM to reduce disk activity, or using LCDs instead of CRTs.

Use good science.

Guge
Developers produce more greenhouse gases than an extra server!
Sam Meldrum
Yes, but they produce fewer greenhouse gases coding than they do loose on the streets. In most societies developers are not a reversible environmental component. Potential servers, however, don't have to be manufactured.
Guge
@Sam, so you're suggesting we cull some devs to cut down on emissions. Or do we switch them into a hibernate mode when they're not coding ;)
Shane MacLaughlin
+1  A: 

Well... having what would once have been called a supercomputer sitting on my desk now certainly makes things easier, but there is something to be said for squeezing out that little extra bit of performance. Like decoding HD video on an OMAP3; I love toying with little stuff like that and seeing just how much power I can squeeze out of a relatively under-powered system.

With energy concerns rising and singularity approaching, it would only make sense for the big companies to begin focusing more heavily on optimization.

It's nice to see Microsoft leading the pack with their Windows overhaul in version 7.

Stephen Belanger
+6  A: 

I've generally been at the other end, because the managers just didn't understand costs.

Manager: We need to optimize because our software runs too slow...
Me: Well, we could upgrade from 1GB of RAM to 2 or more...
Manager: We can't afford it.
Me: o_O

So yes I lean towards more hardware first. No I don't have a problem with that because generally my time is better spent adding more features. While I do enjoy performance programming, I can rarely justify the time it takes to do it.

Cameron MacFarland
+2  A: 

In production environments hardware isn't necessarily that cheap.

Also, production-quality environments usually involve at least two machines at each of two sites, with any single machine able to handle the full workload.

So you pay more for better-quality hardware, and then you multiply that cost by four. Having a skilled programmer iron out the performance issues doesn't seem so expensive after all.

James Anderson
+5  A: 

Simplified reasoning: Hardware is cheap, developers are expensive.

Another way to look at it:

When optimizing code, the results are not very predictable: You could spend 10 days trying to optimize code, and 10 days later end up with no significant result... or with a huge performance improvement, depending on how careless the original developer was.

Upgrading hardware, on the other hand, pretty much guarantees a result. If you get a bigger, faster machine, your existing code will definitely run faster, instantly.

Alterlife
somewhat true, but depends on what the bottleneck is
Seun Osewa
+1  A: 

Simplified reasoning: Hardware is cheap, developers are expensive.

Yes, because most developers you would entrust to optimize that code would earn the cost of more than a few processors and a bunch of RAM in a week.

Robert Gould
A: 

Hardware first, unless the hardware costs significantly more than re-architecting the problem software. You also have to know that there is a problem in the software slowing it down due to a poor choice of algorithm or data structure. Also, specific hardware improvements can boost performance for certain applications at relatively low cost: extra memory, faster hard drives, faster network connectivity.

I.e., it isn't worth optimising the software for a factor-of-2 improvement. Maybe for an order-of-magnitude improvement. Definitely for a reduction in complexity in core algorithms (O(N^2) to O(log N), for example).
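
For instance, if the data can be kept sorted, a linear scan can be swapped for a binary search using Python's standard bisect module; the function names and data here are invented for illustration:

    import bisect

    def contains_linear(sorted_values, target):
        # O(N): walks the list until it finds the value or hits the end.
        for v in sorted_values:
            if v == target:
                return True
        return False

    def contains_binary(sorted_values, target):
        # O(log N): binary search over the already-sorted list.
        i = bisect.bisect_left(sorted_values, target)
        return i < len(sorted_values) and sorted_values[i] == target

    values = list(range(0, 10000000, 2))       # sorted even numbers
    print(contains_binary(values, 1234568))    # True, ~23 comparisons instead of ~600000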

Now if the software is from an outside vendor and performing slowly, or the outside vendor is performing slowly themselves in any regard (support, upgrades, keeping the annual subscription fee fair), then you might consider reimplementing in-house if you have the resources. Especially for CRITICAL business functions, where the supplier is holding you hostage. If the supplier is being fair and good, then the opposite applies.

JeeBee
+1  A: 

Reducing application processing time is relatively easy. Profile, optimize, test. Rinse and repeat.

Getting an application to scale is much harder. The flaws that prevent an application from scaling are usually design flaws, not coding issues. By the time you realize you're in trouble, it's too late.

Throwing hardware at it may help, but will only go so far (see Amdahl's Law). I see this more as a stop-gap "band aid" than a permanent solution.
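
To put a rough number on "only go so far", here is a small back-of-the-envelope sketch of Amdahl's Law; the 90% parallelizable fraction below is an assumed figure for illustration, not from any real system:

    def amdahl_speedup(parallel_fraction, n_machines):
        # Overall speedup = 1 / (serial part + parallel part / N)
        return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_machines)

    # Assume 90% of the workload benefits from extra machines (illustrative only).
    for n in (1, 2, 4, 8, 16, 64, 1024):
        print(n, "machines ->", round(amdahl_speedup(0.90, n), 2), "x speedup")
    # Even with unlimited machines the ceiling is 1 / (1 - 0.90) = 10x.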

There are no shortcuts or cheap fixes. Adding hardware, if it helps your application performance, adds costs to other areas. Capital expenditures, more maintenance, more potential points of failure, more operational costs, more licensing costs...

If performance and scalability matter that much, you really need to do design reviews, modeling and simulations early in the development cycle if you want a reasonable chance of meeting your goals on time and on budget.

Patrick Cuff
A: 

I would always first recommend running your application through a code profiler. Get rid of the major bottlenecks first.

Once you get to the point of diminishing returns, where you're no longer getting order-of-magnitude (or even multiple-fold) performance improvements, then buy more hardware. Make it worth it.
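
For example, a minimal way to let a profiler point at the bottlenecks before touching anything, using Python's built-in cProfile (slow_report() here is a made-up stand-in for your real entry point):

    import cProfile
    import pstats

    def slow_report():
        # Placeholder workload; in practice this is the code path users complain about.
        total = 0
        for i in range(1000000):
            total += i * i
        return total

    profiler = cProfile.Profile()
    profiler.enable()
    slow_report()
    profiler.disable()

    # Show the 10 functions where the most cumulative time was spent.
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)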

Stephane Grenier
+2  A: 

I'm an infrastructure scaling engineer, not a programmer, and although spending your way out of a bottleneck is never a long-term solution, it is often a quicker and cheaper way to deal with immediate performance issues than coding your way out of the same problem.

One reason for this is the falling cost of hardware combined with the constant increase in performance per £/$, plus the fact that coding costs only ever rise (let's ignore offshoring for the moment). Another reason is the risk to fix timescales: you can generally find out up front how quickly you can buy performance-increasing hardware and have it delivered, installed and commissioned. That is far from the case with coding; at the start of a performance problem it's very often a total mystery how long it will take to code your way out of it.

Sometimes it's just not worth the time and hassle of fixing a problem with a compiler when you can fix it with the latest memory/disk/processor etc. That's why not every soldier is a sniper; cannon fodder is cheaper.

Chopper3
+1  A: 

I've no problem with either, but usually hardware is cheaper. And - more importantly - quicker to do.

Example: We had a severe issue with a Java application crashing frequently (about every 10 mins) due to running out of memory. It runs constantly to process data, so having it restart is a major hassle. None of us here know Java to any useful degree. We could have spent days hunting through the nasty multi-threaded code to find where and why the leak was occurring. Or we could throw in a couple more gigs of RAM the next day and schedule the thing to restart once per day when it's idle. We went for the RAM...

Valerion
+1  A: 

Patrick touched on this; it's very often not a simple choice between hardware and software improvements.

To be able to make use of more hardware, the software needs to have been written (or be re-written) in such a way as to allow horizontal scaling.

Strict vertical scaling can't take you very far at all. For example, simple single-threaded applications don't benefit from more cores or CPUs, and more memory and better I/O are only going to take you so far. Throwing machines at software like this will leave you out of pocket, and with lots of very idle machines.
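
As a toy illustration of that point (the prime-counting workload and chunking scheme here are invented, not from any particular system), a CPU-bound loop written single-threaded gains nothing from extra cores until it is restructured to spread the work across them:

    from concurrent.futures import ProcessPoolExecutor

    def count_primes(start, stop):
        # Naive, CPU-bound work: count primes in [start, stop).
        count = 0
        for n in range(max(start, 2), stop):
            if all(n % d for d in range(2, int(n ** 0.5) + 1)):
                count += 1
        return count

    def count_primes_parallel(stop, workers=4):
        # Split the range into chunks so the extra cores can actually be used.
        step = stop // workers
        starts = [i * step for i in range(workers)]
        stops = [stop if i == workers - 1 else (i + 1) * step for i in range(workers)]
        with ProcessPoolExecutor(max_workers=workers) as pool:
            return sum(pool.map(count_primes, starts, stops))

    if __name__ == "__main__":
        # The first call uses one core no matter how many the machine has;
        # the second is the same work with the software roadblock removed.
        print(count_primes(0, 200000))
        print(count_primes_parallel(200000))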

For that reason, if there is a structural or architectural reason that hardware improvements won't help a situation, the only logical choice is to remove those software roadblocks (not necessarily directly improving performance - perhaps hurting it!), then deploy more hardware to actually address the scaling problem.

There are examples of single-threaded applications that are highly performant and scalable (e.g. HAProxy and CICS, until a few years ago). I'd say these are special cases, as there are many more examples of badly architected systems that did not scale or perform due to the restrictions implicitly defined in software (e.g. Twitter and Yahoo! Search, pre-Hadoop).

Alabaster Codify
A: 

The problem where I work is that code is rarely touched again once it goes into production. Hardware is often the only way to scale it. Sad, but true.

duffymo
+2  A: 

I dislike the idea of throwing hardware at a problem for anything other than a temporary and immediate way to address a crippling performance issue. The hardest part of the argument is conveying the value of correcting the underlying issues. What is really at play is a scalability issue, which is classically difficult to justify to business stakeholders.

People always think about having to scale an application up or out, which is what you are doing when you throw hardware at a problem. That kind of scaling has its place, but people neglect to consider that there are business cases where applications need to scale down as well. If your installed base drops and you need to reduce costs, and you have followed the approach of throwing hardware at a problem, there is a very realistic chance that you will not be able to reduce the operational overhead to the level required. This is a very real scenario, especially prevalent during economic downturns and for service-based web sites.

Chances are that you will not be able to convince the business of the need to improve the code immediately. This is a lesson that often needs to be learned the hard way, as it is counterintuitive for a company to think about contraction rather than growth. My recommendation, from experience, is to isolate the issues that are causing the problems and reinvent the architecture around them as a refactoring exercise. For example, if you have a caching issue, start to write a caching framework over several iterations. Once it is completed and tested, start to migrate problematic code to it. If you have written adequate unit tests and have sufficient test coverage, the change will be safe; and if you build up the new architecture in very minimal iterations along with your normal workflow, you can greatly improve your product.
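
As a rough sketch of the kind of small caching layer described above (the class name, TTL scheme and wrapped function are all invented for illustration), something like this can be built in one iteration and then adopted one problematic call site at a time:

    import time
    from functools import wraps

    class SimpleCache:
        def __init__(self, ttl_seconds=300):
            self._ttl = ttl_seconds
            self._store = {}  # key -> (expiry timestamp, value)

        def cached(self, func):
            # Decorator form, so existing call sites can adopt the cache gradually.
            @wraps(func)
            def wrapper(*args):
                key = (func.__name__, args)
                entry = self._store.get(key)
                now = time.time()
                if entry is not None and entry[0] > now:
                    return entry[1]
                value = func(*args)
                self._store[key] = (now + self._ttl, value)
                return value
            return wrapper

    cache = SimpleCache(ttl_seconds=60)

    @cache.cached
    def load_customer(customer_id):
        # Stand-in for the expensive call (database query, remote service, ...).
        time.sleep(0.5)
        return {"id": customer_id, "name": "example"}

    load_customer(42)   # slow: hits the real backend
    load_customer(42)   # fast: served from the cache for the next 60 seconds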

This is considered a "brownfield" approach, whereas a "greenfield" approach is starting from scratch with the intent of implementing a failed or outdated architecture correctly. I personally look at brownfield as a means of survival that can lead to a greenfield effort if the existing system is beyond fixing.

Additionally, I feel that working in an environment where hardware is more of a commodity than a sound product is dangerous. On a development level, it fosters laziness and instability through risky programming practices. On a business level, in the event the company goes on the market, it can actually hurt the value of the company. Large companies in this scenario often have to let prospective buyers look at the architecture and the underlying code. Depending on the importance of the application (if it is what the company is), it can alter the valuation of the company, or even deter prospective buyers and investors.

joseph.ferris
I want to work somewhere with risque programming practices... Sadly, though, I suspect you meant "risky".
Dave Sherohman
You apparently don't have "Clothing Optional Mondays"! ;-)
joseph.ferris
The points about contraction and due diligence are both excellent.
kloucks
+1  A: 

Does not apply in my case. We ship our software to a huge number of customers, and buying them hardware is not an option.

Nemanja Trifunovic
+1  A: 

Throwing hardware at a problem only works as a linear solution until you hit the power/HVAC/BTU limits of your current data center. Then you're talking real money, because you have to find, set up and migrate your server farm to a new location (not to mention reconfiguring your disaster recovery data center), and that's always measured in millions of dollars.

Draw too much power and you risk utilization-triggered brownouts in your server farm; it takes weeks to figure out what's happening unless you're profiling your power draw. Exceed your HVAC's BTU rating and wait for a hot July day to take out the HVAC system and potentially render half your servers permanently inoperable.

I've seen the above happen more than once; then suddenly management sees the developer solution as the cheaper one.

kloucks
+1  A: 

Increasing hardware is fine if your employer controls the hardware, but what if you are selling your software to someone else? Clients generally don't like to hear that they have to get new hardware, and are happy to switch to developers who don't require it of them.

Look at Vista (and I suspect the new Windows will flop for similar reasons).

BubbaT
This is a nice take from a different perspective. It never even crossed my mind until this post.
nick_alot
A: 

You have to go outside to upgrade your hardware. Usually you don't want to leave your house unless you have to.

toto