views: 253
answers: 8

It's been a couple of decades since I've done any programming. As a matter of fact, the last time I programmed was in an MS-DOS environment, before Windows came out. I've had a programming idea that I've wanted to try for a few years now, and I thought I would give it a shot. The amount of calculation involved is enormous. Consequently, I want to run it in the fastest environment available to a general hobby programmer.

I'll be using a 64-bit machine, currently running Windows 7. Years ago, a program ran much slower in the Windows environment than in MS-DOS mode. My personal programming experience has been in Fortran, Pascal, Basic, and machine language for the Motorola 6800 series processors. I'm basically willing to try anything, and I've fooled around with Ubuntu too. No objections to learning something new; I just want to take advantage of speed. I'd prefer to spend no money on this project, so I'm looking for a free or very-close-to-free compiler. I've downloaded Microsoft Visual C++ Express, but I've got a feeling that the compiled code will have to run in the Windows environment, which I'm sure slows the processing speed considerably.

So I'm looking for ideas or pointers to what is available.

Thank you,

Have a Great Day! Jim

+1  A: 

What are you trying to do? On a 64-bit machine, everything should be compiled in 64-bit mode by default. Computers have gotten a lot faster; speed should not be a problem for the most part.

Side note: for computation-intensive work, you may want to look into OpenCL or CUDA. Both take advantage of the GPU, which can run a great many computations in parallel compared to the CPU.

thyrgle
+1 and if SIMD is not the solution, the OP may want to use concurrency, which again would be a pain to support without a proper OS. Regarding "64bit by default", though, the application I work on is 66% slower in 64-bit mode despite the improved instruction set because it is completely bound by memory bandwidth (and manipulates almost exclusively pointers and some native-size integers).
Pascal Cuoq
+7  A: 

There is no substantial performance penalty to operating under Windows, and a large number of extremely high-performance applications do exactly that. With new compiler advances and new optimization techniques, Windows is no longer the up-and-coming, poorly optimized technology it was twenty years ago.

The simple fact is that if you haven't programmed for 20 years, you won't have a realistic performance picture at all. You should do what most people do: start with an easy-to-learn, if not blazingly fast, language like C#, write the program, prove that it runs too slowly, make several optimization passes with tools such as profilers, and only then decide whether the language is too slow. If you haven't written a line of code in two decades, the overwhelming probability is that any program you write will be slow because, by modern standards, you're a novice programmer, not because of your choice of language or environment. Creating very high-performance applications requires a detailed understanding of the target platform, the language of choice, AND the operations of the program.

I'd definitely recommend Visual C++. The Express Edition is free and Visual Studio 2010 can produce some unreasonably fast code. Windows is not a slow platform - even if you handwrote your own OS, it'd probably be slower, and even if you produced one that was faster, the performance gain would be negligible unless your program takes days or weeks to complete a single execution.
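Before worrying about any of that, it's worth simply measuring. A minimal timing sketch (assumes a compiler with C++11 <chrono> support; on older compilers such as VS2010, clock() from <ctime> is a cruder substitute, and compute() here is just a placeholder for your own calculation):

    #include <chrono>
    #include <iostream>

    // Placeholder for the actual calculation being measured.
    double compute()
    {
        double sum = 0.0;
        for (long i = 0; i < 100000000L; ++i)
            sum += 1.0 / (i + 1);
        return sum;
    }

    int main()
    {
        auto start = std::chrono::steady_clock::now();
        double result = compute();
        auto stop = std::chrono::steady_clock::now();
        auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count();
        std::cout << "result = " << result << ", took " << ms << " ms\n";
    }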

DeadMG
I'm not happy with the first paragraph. Why would there be "substantial performance" benefits to writing your own OS? Even if he *did* write his own OS, that wouldn't magically unlock enough performance to be noticeable.
jalf
@jalf: The point was that you would have to go to that length to get something noticably faster than Windows. I mean, it's not worded very well. I'll edit it.
DeadMG
+7  A: 

Speed generally comes at the price of either portability or complexity.

If your programming idea involves lots of computation and you're using an Intel CPU, you might want to use Intel's compiler, which may benefit from processor features that make your program faster. Otherwise, if portability is your goal, use GCC (the GNU Compiler Collection), which can cross-compile well-optimized executables for practically any platform on earth. If your computation can be parallelized, you might also want to look at SIMD (Single Instruction, Multiple Data) and GPGPU/CUDA/OpenCL (using the graphics card for computation) techniques.
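To give a flavor of what SIMD looks like from C++, here is a minimal sketch using SSE intrinsics to add two float arrays four elements at a time (assumes an x86 CPU with SSE and an array length that is a multiple of 4; a modern compiler at -O2/-O3 will often auto-vectorize the equivalent plain loop anyway):

    #include <immintrin.h>  // SSE/AVX intrinsics

    // c[i] = a[i] + b[i], four floats per instruction.
    // Assumes n is a multiple of 4.
    void add_arrays(const float* a, const float* b, float* c, int n)
    {
        for (int i = 0; i < n; i += 4) {
            __m128 va = _mm_loadu_ps(a + i);   // load 4 floats (unaligned)
            __m128 vb = _mm_loadu_ps(b + i);
            _mm_storeu_ps(c + i, _mm_add_ps(va, vb));
        }
    }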

However, I'd recommend you just try your idea in a simpler language first, e.g. Python, Java, C#, or Basic, and see if the speed is good enough. Since you haven't programmed for decades, it's likely that what you remember as an enormous computation is minuscule by today's standards, given the increase in processor speed and RAM. Nowadays there is not much noticeable difference between running in a GUI environment and on the command line.

Lie Ryan
+1 for recommending the Intel ICC compiler (which is available for Windows, Linux and Mac OS X, BTW).
Paul R
@Paul R Of course, you have to prove yourself worthy of ICC by locating it on the corporate website, which is chock-full of dead links to itself. It is **not** linked from here, for instance: http://software.intel.com/sites/products/collateral/hpc/compilers/fmac_brief.pdf
Pascal Cuoq
Ok, it seems that the links on http://software.intel.com/en-us/articles/intel-software-evaluation-center/ are working (at least until the end of the week).
Pascal Cuoq
Intel compiler isn't free.
ruslik
A: 

If your last points of reference are the M68K and PCs running DOS, then I'd suggest you start with C/C++ on a modern processor and OS. If you run into performance problems and can prove they are caused by running on Linux/Windows, or that the compiler's optimized code isn't sufficient, then you could look at other OSes and/or hand-coded ASM. If you're looking for free, Linux/GCC is a good place to start.

dgnorton
+2  A: 

The OS does not make your program magically run slower. True, the OS does eat a few clock cycles here and there, but it's really not enough to be at all noticeable (and it does so in order to provide you with services you most likely need, and would otherwise have to re-implement yourself).

Windows doesn't, as some people seem to believe, eat 50% of your CPU. It might eat 0.5%, but so do Linux and OS X. And if you were to ditch all existing OSes and write your own from scratch, you'd end up with a buggy, less capable OS which also eats a bit of CPU time.

So really, the environment doesn't matter.

What does matter is what hardware you run the program on (and here, running it on the GPU might be worth considering) and how well you utilize the hardware (concurrency is pretty much a must if you want to exploit modern hardware).

What code you write and how you compile it does make a difference. The hardware you're running on makes a difference. The choice of OS does not.
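To make the concurrency point concrete, here is a minimal sketch that splits a sum across hardware threads using C++11 std::thread (work() is just a stand-in for whatever your real per-chunk calculation is):

    #include <thread>
    #include <vector>
    #include <numeric>

    // Sum the range [begin, end) into *out; stand-in for real work.
    void work(const double* begin, const double* end, double* out)
    {
        *out = std::accumulate(begin, end, 0.0);
    }

    double parallel_sum(const std::vector<double>& data)
    {
        unsigned n = std::thread::hardware_concurrency();
        if (n == 0) n = 2;  // hardware_concurrency() may report 0
        std::vector<double> partial(n, 0.0);
        std::vector<std::thread> threads;
        size_t chunk = data.size() / n;
        for (unsigned i = 0; i < n; ++i) {
            const double* b = data.data() + i * chunk;
            const double* e = (i == n - 1) ? data.data() + data.size()
                                           : b + chunk;
            threads.emplace_back(work, b, e, &partial[i]);
        }
        for (auto& t : threads) t.join();
        return std::accumulate(partial.begin(), partial.end(), 0.0);
    }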

jalf
+2  A: 
MaD70
The problem is, you'd have no good way to avoid that OS overhead anyway; even if you wrote your own minimalistic OS, it wouldn't save you much compared to the amount of work it would take. If you really need the best performance, I'd recommend tuning the OS kernel for a server setup to reduce the overhead of interactivity.
Lie Ryan
@Lie Ryan: I think you have a really strange idea of what constitutes "OS overhead". I clearly wrote what I'm referring to: OS overhead, in this context, is the mismatch between the OS's access mode to a resource and the one an application requires for better performance. I have quoted and linked empirical evidence: a research project that demonstrated an 800% increase in performance.
MaD70
@MaD70: When was the paper written? The copyright date seems to indicate 1995, which is a little outdated. Anyway, from the little I've read, the paper seems to be written under the assumption that there is only one resource manager in the OS. This is not true, at least for the Linux kernel. Linux can have multiple resource managers for each kind of resource (CPU scheduler, I/O, RAM allocator, etc.), and people with special needs build their kernel with the manager that suits their usage pattern. People with truly special needs can also patch the kernel with a customized manager.
Lie Ryan
All in all, a factor of eight is not as impressive as it initially looks; thanks to Moore's Law, by just waiting about 3 years (roughly the same amount of time you'd probably need to implement and debug your custom resource manager), you could get an 800% increase in computing power for the same price, without any extra work. And if you really need the program now instead of 3 years from now, you can save time by buying more expensive hardware instead of writing (and debugging, and testing) your own custom resource manager in an exokernel.
Lie Ryan
And in fact, in some cases applications do manage their own resources (e.g. memory, paging, threading). Some applications (e.g. virtual machines) allocate a huge block of memory and suballocate memory from that block to their subprocesses. Other applications write their own custom memory caching/paging. Some applications use custom green threads instead of OS threads. While this isn't as flexible and policy-free as a true exokernel, it works well enough for the majority of cases.
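A minimal sketch of that suballocation idea (a so-called bump or arena allocator; purely illustrative, with error handling and alignment ignored):

    #include <cstddef>
    #include <cstdlib>

    // Grab one big block up front, then hand out pieces by bumping an offset.
    // Individual pieces are never freed; the whole arena is released at once.
    class Arena {
        char*  base;
        size_t used, capacity;
    public:
        explicit Arena(size_t bytes)
            : base(static_cast<char*>(std::malloc(bytes))), used(0), capacity(bytes) {}
        ~Arena() { std::free(base); }
        void* alloc(size_t bytes) {
            if (used + bytes > capacity) return 0;  // arena exhausted
            void* p = base + used;
            used += bytes;
            return p;
        }
    };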
Lie Ryan
+1  A: 

Just my two cents about DOS vs. Windows.

Years ago (something like 1998?), I had the same assumption.

I had a program written in QBasic (this was before I discovered C) which did intense calculations (neural-network back-propagation). And it took time.

A friend offered to rewrite the thing in Visual Basic. I objected, because, you know, all those gizmos, widgets and fancy windows, you know, would slow down the execution of, you know, the important code.

The Visual Basic version outperformed the QBasic one by so much that it became the default application (I won't mention the "hey, even Excel's VBA outperforms you" episode, because of my wounded pride, but...).

The point here, is the "you know" part.

You don't know.

The OS here is not important. As others explained in their answers, choose your hardware and choose your language. And write your code in a clear way, because nowadays compilers are better at optimizing code than developers are, unless you're John Carmack (premature optimization is the root of all evil).

Then, if you're not happy with the result, use a profiler to test your code, and consider multithreading, which will help if you have multiple cores (Intel's TBB comes to mind).
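For example, a minimal TBB sketch (assuming TBB and a compiler with lambda support are available; transform() is a placeholder for the real per-element calculation):

    #include <tbb/parallel_for.h>
    #include <tbb/blocked_range.h>
    #include <vector>
    #include <cmath>

    // Placeholder for the real per-element work.
    double transform(double x) { return std::sqrt(x) * 2.0; }

    void process(std::vector<double>& v)
    {
        tbb::parallel_for(
            tbb::blocked_range<size_t>(0, v.size()),
            [&](const tbb::blocked_range<size_t>& r) {
                for (size_t i = r.begin(); i != r.end(); ++i)
                    v[i] = transform(v[i]);
            });
    }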

paercebal
A: 

I am the original poster of this thread.

I am once again reiterating that this program will involve an enormous number of calculations.

Windows and Ubuntu are multitasking environments. There are many processes running, and some of them are using processor resources. True, many of them appear inactive, but the Windows environment, by the nature of multitasking, is constantly monitoring whether each process needs to run. For example, there are currently 62 processes showing in the Windows Task Manager. According to the Task Manager, three are consuming CPU resources, and the other 59 show as active but consume no CPU time. So that is 62 processes being monitored by Windows, plus Windows itself, which is also checking on various things.

I was hoping to find some way to run a program at the bare-machine level, sidestepping all the Windows (or Ubuntu) involvement.

The idea is very calculation-intensive.

Thank you all for taking the time to respond.

Have a Great Day, Jim

Jim
You're doing a bunch of theoretical posturing. Try something, *anything*, and see if your decades-old assumptions hold up. Then you have a basis for saying "It's not fast enough."
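(And if the worry is background processes stealing cycles: Windows will happily let a compute-bound program dominate the CPU, and you can nudge it further by raising the process priority. A minimal Win32 sketch, with error handling omitted:)

    #include <windows.h>

    int main()
    {
        // Ask the scheduler to favor this process over ordinary ones.
        // (Avoid REALTIME_PRIORITY_CLASS; it can starve the system.)
        SetPriorityClass(GetCurrentProcess(), HIGH_PRIORITY_CLASS);

        // ... run the long calculation here ...
        return 0;
    }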
DaveE