It's a question you've probably asked or been asked several times: what's so great about mainframes? The answer you've probably been given is "they are fast" or "normal computers can't process as many transactions per second as they do." Jeez, it's not like Google is running a bunch of mainframes, and look how many transactions per second they do!

The question here really is "why?" When I ask the mainframe devs I know, they can't answer; they simply restate "it's fast." With the advent of cloud computing, I can't imagine mainframes being able to compete, both cost-wise and mindshare-wise (aren't all the COBOL devs going to retire at some point, or will offshore just pick up the slack?).

And yet, I know a few companies that still pump out net-new COBOL/mainframe apps, even for things we could do easily in, say, .NET or Java.

Does anyone have a really good answer as to why "the mainframe is faster," or can anyone point me to some good articles on the topic?

EDIT: Thanks to everyone who responded; there were some really great answers and, of course, some glib ones :) I chose Cylon Cat's answer, although I wish I could choose multiple... I used this post as the basis for Episode 7 of our podcast, "Why are Mainframes still around -or- Mainframes vs. Cloud". If you enjoyed this question, head over and check out the cast: http://basementcoders.com/?p=485 I think we deduced that for the most part it's the "if it ain't broke, don't fix it" argument wrapped up with the expense associated with rewrites, but there is also a human factor...

+28  A: 

Because: will you rewrite millions of lines of code that hasn't been touched in decades and still works? I won't.

SQLMenace
This is correct. Say it costs $1 million a year to run a mainframe with 20-year-old code, versus $5 million to rewrite it to run on commodity hardware (at, say, $50k a year)... Not always a huge priority if the old code works.
bwawok
There are ports of COBOL and such for PCs.
Steven Sudit
@Steven Sudit: That only helps if you have access to the source code. Often, the source code has been lost, or it is a proprietary system from a company that went out of business 20 years ago. Also, having a specification-compliant COBOL 2000 compiler for a PC doesn't guarantee that code written for some proprietary COBOL compiler from the '70s compiles cleanly. And what would you import your data into, when it is still stored in some pre-SQL, non-relational, hierarchical database?
Jörg W Mittag
@Jörg: Without source code, we'd have to consider an emulator. As for the data, I once worked for IBI, whose products can read all sorts of ancient file formats. Maybe it's better to bite the bullet by porting forward the data and writing code to manipulate it in a way that reflects *current* business needs.
Steven Sudit
@Steven: Then how do you explain net-new mainframe apps being built? To truly rewrite something you don't need the source code, you need good analysis and requirements. So sure, until it becomes too costly, keep the old mainframe stuff around, but create new systems in the mainframe world? Then I highly question one's motives...
ThaDon
Among other factors, interoperability with older applications causes new mainframe apps to get built. It's possible to build PC apps that work with mainframes, but it ain't always easy or clean.
Mike Burton
@ThaDon: I'm not trying to explain why they *are* being built, only why they don't *need* to be. I suspect that Mike's answer has some merit, but it may simply be a matter of inertia. People who only know how to use flat-head screwdrivers are not likely to order devices that use Phillips or Robertson screws.
Steven Sudit
@ThaDon - I agree 1000% with "To truly rewrite something you don't need the source code, you need good analysis and requirements". For us, the worst thing we could do would be to convert the current source, because the current source and its limitations are a huge part of the problem. Why convert an old language and its constructs to run on a new platform? If the idea of migration is to take advantage of new technology and methodology, you would want to take advantage of the advances in programming languages as well. There's no sense in using an OO language just to reproduce procedural code.
jaywon
@jaywon: +1 for that. Yeah, these "source code converters" are bollocks. What they end up spitting out is essentially COBOL written in .NET or Java, that is, a bunch of IF statements with no regard for modern programming techniques or frameworks.
ThaDon
Modern Mainframes ARE emulators of older mainframes...
Thorbjørn Ravn Andersen
+10  A: 

It's a question of scalability: scale-up (faster, more powerful mainframes) vs. scale-out (more nodes in the cloud).

Some tasks are not easily divisible into smaller subtasks that can be run in parallel, so they are not suited to scale-out. What applies here is Amdahl's law; a quick numerical illustration follows.
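
To make Amdahl's law concrete, here is a quick sketch in Java (the 90% parallel fraction is an arbitrary assumption for illustration, not a measurement): the achievable speedup on n processors is 1 / ((1 - p) + p / n), where p is the parallelizable fraction, so the serial part puts a hard ceiling on what scale-out can buy.

    public class AmdahlSketch {
        // Amdahl's law: speedup = 1 / ((1 - p) + p / n)
        static double speedup(double parallelFraction, int processors) {
            return 1.0 / ((1.0 - parallelFraction) + parallelFraction / processors);
        }

        public static void main(String[] args) {
            // Assume a workload that is 90% parallelizable (illustrative only).
            for (int n : new int[] {1, 4, 16, 256, 4096}) {
                System.out.printf("p = 0.90, n = %4d -> speedup %.2f%n",
                                  n, speedup(0.90, n));
            }
        }
    }

Even with 4,096 nodes the speedup stays just under 10x, because the 10% serial portion never gets faster; that serial portion is exactly where a single very fast machine (scale-up) wins.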

vartec
I've heard that it is hardware support for I/O scalability (it's usually data dependencies that prevent scale-out) that makes the value proposition for mainframes.
Justin
Yep, "scale-out" is not that universal. That's why expensive shared-memory units like the IBM p690 successfully share the market with cheap cluster solutions.
sharptooth
+2  A: 

Mainframes pack more computing power into the same space. As bus speeds get faster and faster, the literal physical distance between components becomes a larger and larger share of the remaining delay, so being space-efficient starts to be the best way to improve speed (a back-of-envelope calculation follows below). They also generally operate with much wider buses, so it's easier to push more data around the system, faster.
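
To put a rough number on the distance argument, here is a back-of-envelope sketch; the propagation speed of roughly half the speed of light, about 15 cm per nanosecond on copper, is an assumed round figure.

    public class SignalReachSketch {
        public static void main(String[] args) {
            double cmPerNs = 15.0;  // assumed signal speed on copper: ~15 cm per nanosecond
            for (double ghz : new double[] {1.0, 3.0, 5.0}) {
                double cycleNs = 1.0 / ghz;          // clock cycle time in nanoseconds
                double reachCm = cmPerNs * cycleNs;  // distance a signal covers in one cycle
                System.out.printf("%.1f GHz: cycle = %.3f ns, signal reach ~%.1f cm%n",
                                  ghz, cycleNs, reachCm);
            }
        }
    }

At 3 GHz a signal covers only about 5 cm per clock cycle, so shrinking the physical distance between components buys back latency directly.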

Amber
I agree with you on this, and hey, you *can* run Java on mainframes; I'll bet people do. However, the way to take advantage of the mainframe still seems to be running CICS and COBOL.
ThaDon
I doubt any new mainframes are being used to build new software that uses CICS/COBOL, ThaDon. I could be wrong on this, mind; I haven't touched big iron in ages. Even then, though, there was a shift away from CICS and COBOL.
JUST MY correct OPINION
+7  A: 

Mainframes are still around because mainframe software is still around, and mainframe software tends to have very high service level agreements attached to it, which means it's both difficult and expensive to get right. It's easier to keep what's there, even if it means systems are generations out of date. We're seeing breaking points for some of those systems now, but it has been a long time coming.

Mike Burton
+1  A: 

Mainframes are faster because they shuffle data from disk to memory much, much faster than PCs (think about it: PC disks are built around platters with read/write heads that hop about all over the place to accumulate the ones and zeros that make up a file), and the data transfer bus is phenomenally faster and wider than on PCs and workstations. This explains why mainframes are used in the banking and healthcare sectors; operators want instantaneous results for queries pulled out of disk storage in less than the blink of an eye. That is the major thing about the data transfer buses: fatter and wider!

Since mainframe systems have been around much longer, and are therefore written in COBOL and RPG, they also have a very high degree of scalability in terms of processors. Ever seen the innards of an AS/400 or MVS/390? Awesome to look at!

Also, mainframes have a very long, secure lifespan; companies do NOT want to throw them away, as they are the crucial lifeline to the business, and it is easier to implement changes to the business logic (that is why they are still running COBOL), but it's very costly. The trade-off here is reliability versus cost.

tommieb75
Not really true of a well-designed cluster...
Chinmay Kanchi
Oversimplification, and not true. Think about SSDs and PCs with 128 GB of RAM (very easy to do for way less $$ than a mainframe).
bwawok
Thanks for the downvote when I was in the middle of the edit... Who downvoted, and did that person leave a reason?
tommieb75
@bwawok: What you're talking about is unreliable - sorry, SSDs have a very short mean time between failures and are costly. Once the cost of mass-producing SSDs goes down and capacities increase, then maybe... but the OP was talking in the context of "the now". What I have written is quite true: the mainframe's input/output buses and data transfers gobble up disk and load it into memory far faster than a PC - doesn't matter if it's an i7 or a Xeon core, forget it... Mainframes are the kings... I don't know why this was downvoted...
tommieb75
Because you shouldn't have to hit disk in most cases. Go do a Google search for some obscure phrase. I bet it bounces between 10 different Google servers before returning a result, and I bet none of them so much as touches a hard drive (maybe to write the history of your search after it passes through, or something). Very few things really need to touch a disk drive anymore, so that is not really a valid benefit of mainframes in 2010.
bwawok
Comments about banking wanting results fast... Well, most banking companies run their databases on Oracle, which does NOT run on mainframes. So clearly something is wrong with your logic...
bwawok
@bwawok: Sure. A lot of COBOL code uses file input/output, structured so that the COBOL code can access the records in the files... Some do use databases, but in my experience with JCL, the data files are actually specified...
tommieb75
@JUST MY correct OPINION: Yeah, it is sad to see some people, especially those who have *never* seen a mainframe, let alone *worked* with one, try to dismiss others who have experience in that field; they are the ones that claim to be the *know-it-all* types...
tommieb75
+20  A: 

I haven't been a mainframer for a long time, but I'll give this a try.

For most of the lifetime of computing, disk access has been much faster on mainframes, making them ideal for very large databases. This is not about raw speed or capacity; it's about highly parallel operations at the I/O subsystem level. Until SANs, there wasn't anything that could keep up, and IBM certainly hasn't let SANs pass it by, either.

Second, there's the cost of replacing applications and replicating databases. Many of these systems are written in assembler, especially at lower levels, dealing with transaction-processing software like CICS. And they may be using non-relational databases such as IMS. Migration is a huge hurdle.

Third, mainframes offer a significant advantage in centralizing systems management, as well as in centralizing applications that would otherwise overwhelm networks of interconnected small machines.

Mainframes never really became obsolete. They simply moved from center stage to a niche market, albeit a niche market that centers on the Fortune 500 companies that need huge systems to run their businesses.

Cylon Cat
Database throughput is still a big issue even with clustering and cloud computing. I can attest that Postgres and MySQL both have scaling issues in this area and are *not* necessarily as cost-effective as a mainframe.
Fred Haslam
@Fred - that's because relational databases aren't scalable to a massive degree. If you want a truly scalable datastore, then you need to look towards NoSQL databases.
Keith Rousseau
I'd expect that the best relational databases for scalability are MS SQL Server, Oracle, and IBM's DB2 (not in any particular order). As far as I know, those are the only vendors that have invested heavily in relational scalability. However, for mainframes, keep in mind that IMS is also not relational; IIRC, it's hierarchical.
Cylon Cat
And some of the mainframe source code has been lost (or exists only as printouts) at a number of companies, so it becomes risky to migrate to something else.
RC
+8  A: 

There are mainframe sysplexes running that haven't experienced unplanned downtime since before you were born.

gbn
Then they are severely behind on PTFs...
Thorbjørn Ravn Andersen
There are *sysplexes* that haven't experienced unplanned downtime since before you were born.
Nighthawk
@Nighthawk: thanks, corrected
gbn
+30  A: 

I personally work on a mainframe every day, so I may not be saying anything that hasn't already been said, but it is my own personal experience.

The system and codebase we use have been developed and maintained for over 20 years. It is a BIG system, with hundreds of thousands, if not millions, of lines of code, and the database is a proprietary flat-file database. Anyone who tells me there are conversion programs for what we have has no idea what they're talking about. There are just too many variables, and it simply doesn't convert to, say, a web server farm and SQL clustering. Sorry.

Having said that, the main reason we STILL use it is that we don't have the time/money/manpower to get off of it. A few years ago we did a project to web-enable the front end and get off the terminal-based client interface, which really only dealt with the client side of things, and that project (which was rather successful) took about a year and a half.

I don't particularly enjoy the technology used and I definitely don't enjoy maintaining 20 year old procedural code, but I do understand WHY we do it.

EDIT: Also, as one final note on why we stay on it, and the most important one from a business standpoint: it WORKS, and it works well. We process tens of thousands of transactions per day with no processing problems whatsoever, and we don't even come close to maxing out the CPUs.

EDIT 2: Just for fun, here is a sample of what a conversation with the CIO would be like when trying to push for a migration project:

Manager: We think it would be a good idea to upgrade our hardware/software to something a little more modern.

CIO: Why? What's wrong with what we have now?

Manager: There are a lot of modern platforms and languages that we could take advantage of to meet our current needs and we kind of feel like we're getting left behind.

CIO: Are we not able to meet business requirements with the current system?

Manager: No, although common tasks take a bit longer to complete than they would in a newer language/platform.

CIO: Would switching to a new platform bring us any performance benefits?

Manager: No.

CIO: What kind of timeframe would we be looking at?

Manager: With our current staff, we would be looking at a minimum of 1.5 to 2 years for requirements gathering, development, and QA. That's not including current development/maintenance of existing projects.

CIO: Would we need to hire any new people for development or administration?

Manager: If we did the conversion we may need a few seasoned programmers that could get up to speed quickly, as well as additional QA help for the scope of things to be tested. We would probably also need to look at our facilities to see if they are capable of housing many servers versus the one mainframe that we have now. We would also need someone who could administer many web and database servers once development was done.

CIO: Hmmm. I don't know if that sounds feasible. Is there anything else?

Manager: Well, we're afraid that soon all the COBOL programmers are going to die off and we won't have anyone left to develop on the current system.

CIO: Get out.

jaywon
Man, I wish I could +100 or donate rep points directly. You won the thread.
JUST MY correct OPINION
I agree with this; however, with the advent of SOA we can now talk to you guys in a "reasonable" way, so we can send you information and you can respond. I understand the need to keep a mainframe around for existing code; I just don't see the point in developing new code for it. I know you make light of "COBOL programmers dying off," but seriously, the people who have been maintaining the code for 30+ years are going to retire. You'd better be training the new generation *now*, or all those "millions of lines of code" are going to be as worthless as the punch cards they were written on :)
ThaDon
ANNND! If you are taking the time to train the new devs, why not document all the requirements and business processes of the system so that you can rewrite the thing at some point? The painful part of the rewrite process is not the code, IMHO; it's knowing what the heck the thing does. That's usually locked up inside the guy/gal who's pushing 60 and ready to retire soon. At one place I was at, the core of the system had been running for 35 years! And it needs to "rest" from 7pm to 5am *every day*. Their business hours extend beyond that now that we're in a global economy... Yeah, it's a problem.
ThaDon
@ThaDon - You are right about starting to document what all this old code actually does. That has been a problem for us in the past; entire modules of the system were written by people who are no longer around. Actually, much to my dismay sometimes, I am part of the "new" generation learning how the thing works. We have been documenting things as we go for a while now, and it is going well, but even that falls victim to time constraints and business demands sometimes. Still, we have come a long way in that regard. Fortunately, we don't have a system that needs its beauty sleep :)
jaywon
+6  A: 

I'll add one reason not given in the above list.

Mainframes are greener.

Yes, you can equal the performance of a mainframe with mass-produced PC parts. You might even be able to get reliability up to mainframe levels that way with tricksy programming and clustering. Now measure the following:

  • How much space is your server farm taking to replicate that mainframe?
  • How much power are the machines in your server farm taking to replicate that mainframe?
  • How much cooling does your server farm require to replicate that mainframe?

Believe it or not, when you work out energy and space costs relative to performance, mainframes typically weigh in lower than PC clusters, and not by a small amount either. Those Google data centres everybody's talking about here? They're massive heat generators and power-suction devices.

So why do the bright lads at Google do things that way? Well, here we hit one of the main areas where PC clusters do better than mainframes: parallel operations. When you have a lot of operations that are mostly independent of each other, having a myriad of slower cores networked together generally does better than having a small number of high-speed cores. On the other hand, when you have a task that isn't easily made parallel, or if you have central-control needs, a small number of high-speed cores kicks ass over a large number of slower cores every time.

TL;DR summary: Use the right tool for the job and your problems solve themselves. Mainframes are a tool. Clusters of PCs are a tool. Learn them both before deriding one or the other.

JUST MY correct OPINION
+7  A: 

To run Massively Multiplayer Online Games.

To shrink your data center.

And many other reasons.

The mainframe has a PR problem. Too many people think of mainframes as large, room-sized computers spinning magnetic tape reels. Today's mainframes are not your grandfather's mainframes.

Robert
+1 for "Today's mainframes are not your grandfather's mainframes." Very true; the hardware on our mainframe has been upgraded tremendously since its inception. The machine is very modern; it's the codebase that is old and hard to replace. PR problem indeed. Maybe they should stick a cute piece of fruit on the side ;)
jaywon
Perhaps a blueberry?
Nighthawk
+5  A: 

I have to say that there is much misinformation, and much outright bigotry, in this thread.

Mainframes are around, and still actively developed for, because they accomplish work. You can't replace the mainframe without becoming the mainframe -- sure, you can use 10,000 beige boxes to simulate the capabilities of 10 high-end z/OS boxes, but by the time you do, you will have exceeded the cost of the 10 mainframes. Not to mention the insane cost of the IT staff needed to track down and replace the dozen or so beige-box failures you will have in any given shift. For the volume of work they do, mainframes are very cost-effective and easy to maintain.

Every time you trade a stock, swipe a credit/debit card, make a phone call, send a text message, check a balance, fly in a plane, make a hotel reservation...pretty much anything with a monetary component to it...your transaction will fly through a mainframe at some point.

Further, they make great web servers -- one mainframe can be sliced up into thousands of virtual Linux partitions -- some of you may have websites hosted on mainframes and not even know it. Or any number of CICS/TS tasks can simulate any number of independent web servers. They also serve as excellent database servers because of their reliability and fault tolerance. The ability to process billions (yes, billions) of transactions per day should not be ignored. You can get the same performance out of high-end PC hardware, but you need to add dozens of cores, multiple redundant disks, and multiple redundant networks; by the time you are done, you could have had a mainframe for less money.

Mainframes today can and do run all the things that run on beige boxes -- Java/JEE, Ruby, C/C++, PHP, et al. (To get .NET you need to use a Mono port or Citrix or similar, but who cares -- .NET's just another VM.)

I find it very interesting that people who haven't seen a mainframe since the mid-70s want to compare what they remember to the PCs of today... I'm sure they would find fault if I compared the 33,000 MIPS z/OS sysplex I work on today with an IBM PC Junior from their yesteryear.

And yes, COBOL does exist and is actively developed in financial houses all over the world. It is easy to teach, easy to use, and hard to mess up (compared to other languages/platforms/frameworks).

Joe Zitzelberger
And any idiot off the street who can understand VB or Java can be taught to use COBOL in a few days... so why the worry about all the programmers dying off?
Joe Zitzelberger
@Joe: Depends on what you mean by "understand". I've seen a lot of Java code that looks like it's been written by a COBOL programmer... Also, I have run WAS on z/OS, and you know what? Slow as cr@p. If you ask me, mainframes aren't primarily being used to run Java; it's an afterthought for corporations: "Hey, we can run Java on these things." I think the only place you'll see billions (with a "B") of transactions per second is COBOL/ASM running on the mainframe.
ThaDon
+3  A: 

There's actually a large, global community of mainframe users. I am the managing editor for IBM Systems Magazine, Mainframe edition. We just released a video about the history of the mainframe that explains why it's still relevant today!

www.ibmsystemsmag.com/mainframe/bigiron

I'd love to hear what you think.

Natalie
The video mostly features mainframers saying how good they think the mainframe is, and how well they think it handles business requirements and heavy workloads. That's really an ad rather than a historical review, as there's almost nothing in the video about the technology that makes mainframes tick. Sorry for my lack of sugar-coating skills...
John Reynolds
A: 

Is it about speed? Or is it about reliability? We are talking numbers here. Banks, pharmaceuticals, and manufacturing are managing pure numbers, or converting numbers that represent other numbers. Mainframes do a more solid job of handling this.

bob