views: 761

answers: 10
Sometimes it's difficult to describe to non-programmers and management types some of the things that "us programmers" may think are simple.

So...

How would you describe the difference between Managed Code (or Java Byte Code) and Unmanaged/Native Code to a Non-Programmer?

A: 

"The specific term managed code is particularly pervasive in the Microsoft world."

Since I work in MacOS and Linux world, it's not a term I use or encounter.

The Brad Abrams "What is Managed Code" blog post has a definition that says things like ".NET Framework Common Language Runtime".

My point is this: it may not be appropriate to explain the term at all. If it's a bug, hack or work-around, it's not very important. Certainly not important enough to work up a sophisticated lay-person's description. It may vanish with the next release of some batch of MS products.

S.Lott
"Don't bother because it may go away"...I wish my boss accepted this as an answer.
EBGreen
@EBGreen: Look at MS product announcements. MS is always creating and dropping terminology to explain away some problem or other. When your company switches to Linux, you won't care anymore.
S.Lott
"Managed Code" is not a term that explains away a problem. It's a term they've given to the code that is generated from a compiler targeting the .NET Framework (similar to Java Byte Code).
Chris Pietschmann
Yeah, but the principles behind .NET managed code and Java byte code are the same. It's terminology for a concept, and the concept is what he is trying to explain, not the name. A rose by any other name would smell just as sweet...
Omar Kooheji
Managed code is the Microsoft term for code run on a garbage collecting VM. It's not going to "go away", since it's generally agreed that Garbage Collecting environments are the single biggest software development productivity boost in the last several decades.
Adam N
LOL - "When your company switches to Linux, you won't care anymore" - TRUE, because then you'll have a whole new set of problems!
Steven A. Lowe
"Garbage Collecting environments are the single biggest software development productivity boost in the last several decades" In the Linux world, we don't much care. So, I'm not sure that "biggest" applies. Big, maybe. Perhaps "biggest for Windows only".
S.Lott
I need a "silver gun" to shoot myself in the foot with this great new "silver bullet"? Or my old gun is ok? Garbage Collection is a memory management technique and, as every implementation technique, need to be understood to be used correctly. Look how, at ITA Software, they bypass GC to obtain good perfomance: http://paulgraham.com/carl.html
MaD70
+3  A: 

Think of your desk: if you clean it up regularly, there's space to set what you're actually working on in front of you. If you don't clean it up, you run out of space.

That space is equivalent to computer resources like RAM, Hard Disk, etc.

Managed code lets the system automatically choose when and what to clean up. Unmanaged code makes the process "manual", in that the programmer needs to tell the system when and what to clean up.
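
If a code sketch helps, here is the "manual" side in a minimal, hypothetical C++ fragment (the names are invented for illustration); managed code, roughly speaking, does the last step for you:

    #include <cstdlib>

    void handle_request() {
        // Clear a spot on the desk by hand: ask the system for some memory...
        char *workspace = static_cast<char *>(std::malloc(1024));
        if (workspace == nullptr) return;   // the desk (RAM) is already full

        // ... do the actual work on that spot ...

        // ...and put it away again afterwards. Forget this line and the desk
        // slowly fills up until there is no room left (a memory leak).
        std::free(workspace);
    }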

Andrew Theken
This made me think of one of Prof. Epps' cutscene explanations on Numb3rs.
Bill the Lizard
+1  A: 
Joachim Sauer
+8  A: 

Managed Code == "Mansion House with an entire staff or Butlers, Maids, Cooks & Gardeners to keep the place nice"

Unmanaged Code == "Where I used to live in University"

Eoin Campbell
Nice analogy. :)
EBGreen
Thanks... I always hated having to Malloc() my bed and pointer the other guys towards the dirty dishes.
Eoin Campbell
So you're saying managed code is more expensive and you have less freedom with it? Seems accurate.
Edmund
+2  A: 

Perhaps compare it with investing in the stock market.

You can buy and sell shares yourself, trying to become an expert in what will give the best risk/reward - or you can invest in a fund which is managed by an "expert" who will do it for you - at the cost of losing some control, and possibly some commission. (Admittedly I'm more of a fan of tracker funds, and the stock market "experts" haven't exactly done brilliantly recently, but....)

Jon Skeet
+1  A: 

Here's my Answer:

Managed Code (.NET) or Byte Code (Java) will save you time and money.

Now let's compare the two:

Unmanaged or Native Code

You need to do your own resource (RAM / Memory) allocation and cleanup. If you forget something, you end up with what's called a "Memory Leak" that can crash the computer. A Memory Leak is the term for when an application keeps using up (eating up) RAM/Memory without letting it go so the computer can use it for other applications; eventually this causes the computer to crash.
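
As a rough illustration (a hypothetical C++ snippet, not taken from any real product), this is all it takes to create that kind of leak:

    // Hypothetical example: imagine this is called once per incoming request.
    void on_request() {
        // Grab one megabyte of RAM for this request...
        char *buffer = new char[1024 * 1024];

        // ... use the buffer to do the work ...

        // Bug: `delete[] buffer;` is never called, so every request permanently
        // claims another megabyte. Over time the program eats all available
        // memory -- the "Memory Leak" described above.
    }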

In order to run your application on different Operating Systems (Mac OSX, Windows, etc.) you need to compile your code specifically for each Operating System, and possibly change a lot of code that is Operating System specific so it works on each Operating System.

.NET Managed Code or Java Byte Code

All the resource (RAM / Memory) allocation and cleanup are done for you and the risk of creating "Memory Leaks" is reduced to a minimum. This allows more time to code features instead of spending it on resource management.

In order to run your application on different Operating Systems (Mac OSX, Windows, etc.) you just compile once, and it'll run on each as long as they support the Framework your app runs on top of (.NET Framework / Mono or Java).

In Short

Developing using the .NET Framework (Managed Code) or Java (Byte Code) makes it overall cheaper to build an application that can target multiple operating systems with ease, and allows more time to be spent building rich features instead of the mundane tasks of memory/resource management.

Also, before anyone points out that the .NET Framework doesn't support multiple operating systems, I need to point out that technically Windows 98, WinXP 32-bit, WinXP 64-bit, WinVista 32-bit, WinVista 64-bit and Windows Server are all different Operating Systems, but the same .NET app will run on each. And, there is also the Mono Project that brings .NET to Linux and Mac OSX.

Chris Pietschmann
+1  A: 

Unmanaged code is a list of instructions for the computer to follow. Managed code is a list of tasks for the computer to follow, where the computer is free to decide on its own how to accomplish them.

Joel Lucsy
A: 

The big difference is memory management. With native code, you have to manage memory yourself. This can be difficult and is the cause of a lot of bugs and a lot of development time spent tracking down those bugs. With managed code, you still have problems, but a lot fewer of them, and they're easier to track down. This normally means less buggy software and less development time.

There are other differences, but memory management is probably the biggest.

If they were still interested I might mention how a lot of exploits come from buffer overruns and that you don't get those with managed code, or that code reuse is now easy, or that we no longer have to deal with COM (if you're lucky, anyway). I'd probably stay away from COM, otherwise I'd launch into a tirade over how awful it is.
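
For anyone curious what a buffer overrun looks like, here is a hypothetical C++ fragment (names invented); a managed runtime would stop the out-of-range write with an exception instead of silently corrupting memory:

    #include <cstring>

    void copy_name(const char *untrusted_input) {
        char name[16];  // room for 15 characters plus the terminating '\0'
        // No length check: if untrusted_input is longer than 15 characters,
        // the copy runs past the end of `name` and overwrites whatever sits
        // next to it in memory -- the classic overrun many exploits build on.
        std::strcpy(name, untrusted_input);
    }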

dan gibson
A: 

It's like the difference between playing pool with and without bumpers along the edges. Unless you and all the other players always make perfect shots, you need something to keep the balls on the table. (Ignore intentional ricochets...)

Or use soccer with walls instead of sidelines and endlines, or baseball without a backstop, or hockey without a net behind the goal, or NASCAR without barriers, or football without helmets...

le dorfier
+3  A: 

I'm astonished by what emerges from this discussion (well, not really but rhetorically). Let me add something, even if I'm late.

Virtual Machines (VMs) and Garbage Collection (GC) are decades old and are two separate concepts. Garbage-collected, native-code-compiled languages exist, and have for decades (canonical example: ANSI Common Lisp; there is even at least one compile-time garbage-collected declarative language, Mercury - but apparently the masses scream at Prolog-like languages).

Suddenly GCed byte-code based VMs are a panacea for all IT diseases. Sandboxing of existing binaries (other examples here, here and here)? Principle of least authority (POLA)/capabilities-based security? Slim binaries (or their modern variant, SafeTSA)? Region inference? No, sir: Microsoft & Sun do not authorize us even to think about such perversions. No, better to rewrite our entire software stack for this wonderful(???) new(???) language§/API. As one of our hosts says, it's Fire and Motion all over again.

§ Don't be silly: I know that C# is not the only language that targets .NET/Mono; it's hyperbole.

Edit: it is particularly instructive to look at the comments to this answer by S.Lott in light of the alternative techniques for memory management/safety/code mobility that I pointed out.

My point is that non technical people don't need to be bothered with technicalities at this level of detail.

On the other hand, if they are impressed by Microsoft/Sun marketing, it is necessary to explain to them that they are being fooled - GCed byte-code based VMs are not the novelty they are claimed to be, they don't magically solve every IT problem, and alternatives to these implementation techniques exist (some of them better).

Edit 2: Garbage Collection is a memory management technique and, like every implementation technique, needs to be understood to be used correctly. Look at how, at ITA Software, they bypass GC to obtain good performance:

4 - Because we have about 2 gigs of static data we need rapid access to, we use C++ code to memory-map huge files containing pointerless C structs (of flights, fares, etc), and then access these from Common Lisp using foreign data accesses. A struct field access compiles into two or three instructions, so there's not really any performance penalty for accessing C rather than Lisp objects. By doing this, we keep the Lisp garbage collector from seeing the data (to Lisp, each pointer to a C object is just a fixnum, though we do often temporarily wrap these pointers in Lisp objects to improve debuggability). Our Lisp images are therefore only about 250 megs of "working" data structures and code.

...

9 - We can do 10 seconds of Lisp computation on a 800mhz box and cons less than 5k of data. This is because we pre-allocate all data structures we need and die on queries that exceed them. This may make many Lisp programmers cringe, but with a 250 meg image and real-time constraints, we can't afford to generate garbage. For example, rather than using cons, we use "cons!", which grabs cells from an array of 10,000,000 cells we've preallocated and which gets reset every query.
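
To make the preallocation trick concrete, here is a rough C++ sketch of the same idea (all names invented; the real ITA code is Lisp plus C++ and far more involved): allocate the whole pool once at startup, hand out cells by bumping an index, and reset the index per query instead of ever freeing anything.

    #include <cstddef>
    #include <stdexcept>
    #include <vector>

    struct Cell { void *head; void *tail; };

    class CellPool {
    public:
        // Preallocate every cell up front, e.g. CellPool pool(10000000);
        explicit CellPool(std::size_t capacity) : cells_(capacity), next_(0) {}

        // The "cons!" of the quote: hand out the next preallocated cell.
        Cell *allocate() {
            if (next_ == cells_.size())
                // "die on queries that exceed them"
                throw std::runtime_error("query exceeded preallocated cells");
            return &cells_[next_++];
        }

        // Reset between queries: no garbage is ever generated or collected.
        void reset() { next_ = 0; }

    private:
        std::vector<Cell> cells_;
        std::size_t next_;
    };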

Edit 3: (to avoid misunderstanding) is GC better than fiddling directly with pointers? Most of the time, certainly, but there are alternatives to both. Is there a need to bother users with these details? I don't see any evidence that this is the case, besides dispelling some marketing hype when necessary.

MaD70
Of course they are not entirely novel concepts. Still, the wide use of this exact combination of technologies and the scope of the implementation (and the improvements in practical JIT/runtime optimization) is certainly something those runtimes have introduced as new. Other than that, your rant is hardly relevant to the question: even if it's not new, it might still be necessary to describe it to a layman.
Joachim Sauer
You don't give even a single reason why this is necessary; you keep asserting it and pretend that if one doesn't follow along, s/he is "ranting".
MaD70