I'd like to better understand the reasons for .NET's application server model compared to that used by most Java application servers.

In most of the ASP.NET web applications I've seen, business logic is hosted in the web server's ASP.NET worker process. Another common approach is to have a physically or logically separate tier that hosts your business objects, which are then exposed as web services or accessed via mechanisms like WCF. The latter approach typically (but not always) seems to be used when higher scale is required. In the days of COM objects, Microsoft Transaction Server (MTS) and later COM+ were used to host COM objects containing business logic, with MTS (theoretically) managing object lifetime, transactions, concurrency, yada yada. This model seems to have largely disappeared in ASP.NET land.
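To make the first model concrete, here is a minimal sketch of in-process hosting. The OrderService class and its method are hypothetical names used only for illustration; the point is just that the business object runs inside the same ASP.NET worker process as the page that calls it:

    // Hypothetical business object, hosted in-process: it lives in the same
    // ASP.NET worker process as the page or controller that calls it.
    public class OrderService
    {
        public decimal CalculateTotal(int orderId)
        {
            // Placeholder business logic, for illustration only.
            return 0m;
        }
    }

    // Called directly from a Web Forms code-behind: no serialization, no
    // process hop, just an ordinary in-process method call.
    public partial class OrderPage : System.Web.UI.Page
    {
        protected void Page_Load(object sender, System.EventArgs e)
        {
            var service = new OrderService();
            decimal total = service.CalculateTotal(42);
        }
    }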

In the Java world you might have Apache HTTP Server fronting Tomcat as the servlet container, with your business objects hosted in Tomcat. In this case, Tomcat plays a role loosely analogous to MTS in the .NET world (a full Java EE application server with an EJB container is the closer analogue, since it also manages object lifetime and transactions).

Several questions:

  1. Why the fundamental difference in the Microsoft vs. Java approaches to application servers? This must have been an architecture/design choice when these frameworks were created.
  2. What are the pros and cons of each approach?
  3. Why did Microsoft move away from the MTS hosting model (which is similar to the Tomcat servlet hosting model) to the current, more common approach of simply having business objects as part of the web server's ASP.NET process?
  4. If you wanted to implement the MTS-style or Tomcat-style approach in an ASP.NET application today, I assume a common pattern would be to host business objects in an IIS process (possibly on a different physical or logical tier) and access them via WCF (or plain .asmx web services, whatever). Is this a correct assumption? (A rough sketch of what I mean follows below.)
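For concreteness, here is a minimal WCF sketch of the pattern question 4 describes. The IOrderService contract, the endpoint address, and the binding choice are all hypothetical; this is one way the pattern could be wired up, not a prescribed implementation:

    using System.ServiceModel;

    // Hypothetical contract for business logic hosted on a separate tier.
    [ServiceContract]
    public interface IOrderService
    {
        [OperationContract]
        decimal CalculateTotal(int orderId);
    }

    // Implementation deployed to the application tier (e.g., hosted in IIS).
    public class OrderService : IOrderService
    {
        public decimal CalculateTotal(int orderId)
        {
            return 0m; // placeholder business logic
        }
    }

    // On the web tier, the ASP.NET app calls the remote object through a
    // channel instead of constructing it in-process.
    class WebTierCaller
    {
        static void Main()
        {
            var factory = new ChannelFactory<IOrderService>(
                new BasicHttpBinding(),
                new EndpointAddress("http://app-tier/OrderService.svc")); // hypothetical address
            IOrderService proxy = factory.CreateChannel();
            decimal total = proxy.CalculateTotal(42);
            factory.Close();
        }
    }

Swapping BasicHttpBinding for another binding, or hosting the implementation in IIS versus a Windows service, changes the transport and hosting details without touching the business logic.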
+1  A: 

To my way of thinking, the primary difference is in the "open" approach vs. the "integrated stack" approach. Microsoft likes to provide everything as an integrated stack that all shares a common flavor and approach. Java is more friendly to the "bring your own x" model, where you may want to plug in your favorite application server, transaction manager, etc. Both technology stacks allow in-process invocation as well as remote invocation with varying levels of transaction support.

Really, WCF is not a new technology stack, but a reorganization and rebranding of existing elements of the .NET stack. Specifically, WCF took on the functions of .NET Remoting, ASMX web services, and distributed transactions.
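To illustrate that reorganization (reusing the hypothetical IOrderService contract and OrderService class from the sketches above): a single WCF service can expose the same contract over an HTTP/SOAP endpoint, covering the old ASMX web-services role, and over a binary TCP endpoint, covering the old .NET Remoting role, with only the endpoint configuration changing:

    using System;
    using System.ServiceModel;

    // Self-hosting sketch: the same (hypothetical) contract is exposed over
    // two bindings. The contract and implementation do not change; only the
    // endpoint configuration does.
    class AppTierHost
    {
        static void Main()
        {
            using (var host = new ServiceHost(typeof(OrderService),
                new Uri("http://localhost:8000/OrderService"),
                new Uri("net.tcp://localhost:8001/OrderService")))
            {
                // SOAP over HTTP: the role ASMX web services used to fill.
                host.AddServiceEndpoint(typeof(IOrderService),
                    new BasicHttpBinding(), "");
                // Binary over TCP: the role .NET Remoting used to fill.
                host.AddServiceEndpoint(typeof(IOrderService),
                    new NetTcpBinding(), "");
                host.Open();
                Console.WriteLine("Service running; press Enter to stop.");
                Console.ReadLine();
            }
        }
    }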

You reference "the more common current approach which is just to have business objects as part of the web server's ASP.NET process." That is only common for non-distributed apps. It is a simple model that works well when all of your objects will reside on the same server. This follows Microsoft's "scale out" deployment model: rather than segregating object tiers across servers, you put everything but the database on the web servers and then incrementally add identical, redundant servers to scale out the web-server layer.

Microsoft has been pushing hard lately on Service Oriented Architecture, which relies more heavily on WCF and remote invocation. This is seen as a key to the cloud strategy that would have people moving parts or all of their applications to flexible resources in the cloud (which MS would like to host with Azure and the like).

Jason
@Jason - I've had to split my reply into 3 comments due to the length. I totally understand your point about the open/bring-your-own-x approach vs. the integrated-stack approach, but what I'm asking is why, not in terms of philosophical differences but at a technical and architectural level. I don't understand why Microsoft's model satisfies the needs of Microsoft applications while the Java application server model does the same for Java applications.
Howiecamp
@Jason - (Part 2) - Architecturally (from a general application requirements perspective), the same concerns/issues ought to be present in both cases. Regarding WCF, I was simply using it as an example of how a .NET web app might talk to some other logically or physically separate process. It could be a can and string; the transport isn't core to my question.
Howiecamp
@Jason - (Part 3) - Obviously hosting objects inside the web server's ASP.NET process works if you don't need a distributed architecture. I'm just saying that in Microsoft land this approach is more common. You'd want a distributed system if, for example, you wanted to distribute processing load. What I'm wondering is: if Microsoft's original approach with MTS gave you this ability, why did they then essentially take the focus off MTS and shift toward an all-local approach?
Howiecamp
I don't know that Microsoft is more focused on the all-local approach so much as that developers who use Microsoft tools gravitate toward it. If you read Microsoft's development literature over the past few years (MSDN Magazine, MSDN.com, blogs, etc.), Microsoft is very much pushing SOA based on WCF. As a developer who's deployed large online solutions based on .NET, I generally lean toward the local approach because the distributed approach ends up costing me performance at runtime, and most of my projects don't involve sharing services across multiple applications.
Jason