I'd like to better understand the reasoning behind .NET's application server model compared to the one used by most Java application servers.
In most ASP.NET web applications I've seen, the business logic is hosted inside the web server's ASP.NET worker process. Another common approach is a physically or logically separate tier that hosts your business objects, which are then exposed as web services or accessed via mechanisms like WCF. The latter approach typically, though not always, shows up when higher scale is required. Back in the days of COM objects, I saw Microsoft Transaction Server (MTS), and later COM+, used to host COM objects containing business logic, with MTS (theoretically) managing object lifetime, transactions, concurrency, yada yada. That model seems to have largely disappeared in ASP.NET land.
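To make the separate-tier approach concrete, here's a minimal sketch of the kind of thing I mean, with a WCF contract fronting the business logic (the IOrderService contract and its implementation are placeholder names I made up):

```csharp
using System.ServiceModel;

// Hypothetical contract for the business tier, hosted in its own
// process (possibly on its own physical or logical tier).
[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    decimal GetOrderTotal(int orderId);
}

// The business logic lives behind the contract, not in the web app.
public class OrderService : IOrderService
{
    public decimal GetOrderTotal(int orderId)
    {
        // ... business rules, data access, etc.
        return 42.00m;
    }
}
```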
In the Java world you might have Apache fronting Tomcat as the servlet container, with your business objects hosted in Tomcat. In that case Tomcat provides some of what MTS provided in the .NET world, though a full Java EE application server with an EJB container is the closer analogue for transaction and lifetime management.
Several questions:
- Why the fundamental difference in the Microsoft vs. Java approaches to application servers? This must have been an architecture/design choice when these frameworks were created.
- What are the pros and cons of each approach?
- Why did Microsoft move away from the MTS hosting model (which resembles the Tomcat servlet hosting model) to the current, more common approach of simply running business objects inside the web server's ASP.NET process?
- If you wanted to implement the MTS-style or Tomcat-style approach in an ASP.NET application today, I assume a common pattern would be to host the business objects in some IIS process (possibly on a different physical or logical tier) and access them via WCF (or plain ASMX web services, whatever), as sketched below. Is this a correct assumption?
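If that assumption is right, I'd expect the web tier to consume the separately hosted business objects roughly like this (the binding choice and endpoint address here are made up for illustration):

```csharp
using System;
using System.ServiceModel;

// Web-tier code calling the remote business tier over WCF.
var factory = new ChannelFactory<IOrderService>(
    new BasicHttpBinding(),
    new EndpointAddress("http://appserver:8080/OrderService")); // hypothetical address

IOrderService proxy = factory.CreateChannel();
decimal total = proxy.GetOrderTotal(123);

((IClientChannel)proxy).Close();
factory.Close();
```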