For some monitoring applications, and for tasks that need to be scheduled to poll some service, we traditionally use a console application that in turn calls methods in our business layer or polls file/FTP locations.

For another task I was carrying out I started playing about with PowerShell and was pretty impressed, which got me thinking: what are the relative benefits of a PowerShell script versus a console application?

It seems that a PowerShell script can be edited on the fly without recompiling, which makes it a plus for potential changes, but there must be drawbacks that I am not seeing.

So when would people advise swapping a console application for a PowerShell script?

+2  A: 

Well, it seems like your case is a near-perfect fit for the things PowerShell was designed for. The only possible drawback I can imagine is that PowerShell can be a little slow, as it is interpreted rather than compiled, and it was optimized for ease of use rather than speed.

Joey
+6  A: 

I think the better way to think of this is, when would you choose a console application?

If you're not concerned about bleeding-edge run-time speed, distribution to third parties (PowerShell isn't quite standard yet), or protecting source code, then I think PowerShell is a strong contender.

By the way, PowerShell can manipulate COM objects out of the box, so in terms of task automation it works quite well as glue code between .NET and COM-based infrastructure.
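
For instance, a minimal sketch of scripting a COM object from PowerShell might look like this; the WScript.Shell object used here is a standard Windows component, chosen purely for illustration:

    # Instantiate a COM object directly; no interop assembly required.
    $shell = New-Object -ComObject WScript.Shell
    # Call a COM method as though it were a .NET method.
    $shell.Popup("Hello from PowerShell via COM", 0, "Demo", 0) | Out-Null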

hythlodayr
+1  A: 

You should also consider the size of the "app". If a single, smallish file can manage the task, then PowerShell is a great solution. Once you get beyond that, then you need to ask questions about the maintainability and understandability of the script vs. typical application code. (And source control shouldn't enter the equation since both should be stored there!)

John Fisher
+5  A: 

The biggest benefit to me is losing the compile process and the rollout of binaries. I'll give you an example. I had an application that used some assemblies out of the Visual Studio private assemblies folder; the app instrumented binaries and ran unit tests during our compile process. When VS 2008 came out, I had to change resources, re-compile, and then roll out binaries to all of our build servers.

I decided this was stupid and switched to PowerShell, so now my script figures out which version of VSTS is installed and loads the highest-versioned DLLs. You could do this in an app using reflection and late binding, but it is much easier in PowerShell. Also, each release engineer can quickly modify the script in a text editor when we add or remove binaries that we need to instrument. For smallish in-house apps I always use PowerShell now...
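
A rough sketch of that "load the highest version" trick might look like the following; the folder layout matches standard Visual Studio installs, but the specific DLL is an illustrative assumption, not the actual file from this answer:

    # Candidate Visual Studio install roots, newest first (2008 = 9.0, 2005 = 8).
    $roots = 'C:\Program Files\Microsoft Visual Studio 9.0',
             'C:\Program Files\Microsoft Visual Studio 8' |
        Where-Object { Test-Path $_ }

    # The list is ordered newest-first, so the first surviving entry wins.
    $latest = $roots | Select-Object -First 1
    $dll = Join-Path $latest 'Common7\IDE\PrivateAssemblies\Microsoft.VisualStudio.QualityTools.UnitTestFramework.dll'

    if (Test-Path $dll) {
        # Load the assembly at run time instead of compiling against it.
        [Reflection.Assembly]::LoadFrom($dll) | Out-Null
    }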

Alex
+1  A: 

Don't underestimate the value of essentially free parameter parsing, which gets even better with advanced functions in V2. Think about all the little console apps you write and how much of that code is parameter parsing versus doing something interesting. Also think about how well you handle parameter parsing. Do you handle positional vs. named parameters? How about parameter validation? Default parameter values? How about response files? While Posh doesn't support response files in a literal sense, V2 has a splatting operator that allows you to package parameters in either an array or a hashtable - a very similar capability.
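
For example, here is a minimal sketch of a V2 advanced function plus splatting; the names (Get-Widget, -Name, -Count) are illustrative, not from the original post:

    function Get-Widget {
        [CmdletBinding()]
        param(
            # Positional or named use both work; mandatory and validated for free.
            [Parameter(Mandatory=$true, Position=0)]
            [ValidateNotNullOrEmpty()]
            [string]$Name,

            # Default value and range validation, with no hand-rolled parsing.
            [ValidateRange(1, 100)]
            [int]$Count = 10
        )
        "$Name x $Count"
    }

    # Splatting: bundle parameters in a hashtable, much like a response file.
    $params = @{ Name = 'gears'; Count = 25 }
    Get-Widget @params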

OTOH, at some point, if my script starts getting huge and I'm invoking .NET code more than cmdlets, I start to think about writing a cmdlet to do the work. The VS debugger is still better than even the V2 debugging capabilities.