I am about to dive into a rules-oriented project (using ILOG's Rules for .NET, now IBM), and I have read a couple of different perspectives on how to set up rules processing and how to interact with the rule engine.

The two main approaches I have seen are to centralize the rule engine (into its own farm of servers) and program against the farm via a web service API (or, in ILOG's case, via WCF), or to run an instance of the rule engine on each of your app servers and interact with it locally, each instance having its own copy of the rules.

The upside to centralization is the ease of deploying rules to a single location. The rules tier scales as it needs to, rather than scaling each time you expand your application server configuration, which reduces waste from a licensing perspective. The downside to this setup is the added overhead of making service calls, network latency, etc.

The upsides and downsides of running the rule engine locally are the exact opposites of the centralized configuration's: no slow service calls (just fast in-process API calls), no network issues, and each app server relies only on itself. On the other hand, managing rule deployment becomes more complex, and each time you add a node to your app cloud you need more rule engine licenses.

In reading white papers, I see that Amazon runs a rule engine per app server. They appear to do a slow rollout of rules and accept the lag in rule publishing as "acceptable", even though business logic is out of sync for a given period of time.

Question: From your experience, what is the best way to start integrating rules into a .NET-based web app for a shop that has not yet spent much time working in a rules-driven world?

A: 

I never liked the centralization argument. It means that everything is coupled into the rules engine, which becomes a dumping ground for all the rules in the system. Pretty soon you can't change anything for fear of the unknown: "What will we break?"

I much prefer following Amazon's idea of services as isolated, autonomous components. I interpret that to mean that services own their data and their rules.

This has the added benefit of partitioning the rules space. A rule set becomes harder to maintain as it grows; better to keep each one to a manageable size.

If parts of the rule set are shared, I'd prefer a data-driven, DI approach where a service can have its own instance of a rules engine and load the common rules from a database on startup. This might not be feasible if your ILOG license makes multiple instances cost-prohibitive. That would be a case where the product that's supposed to be helping is actually dictating architectural choices that will bring grief. It would be a good argument for a less expensive alternative (e.g., JBoss Rules in Java-land).
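
Roughly, that data-driven, DI idea might look like the sketch below. All of the names here are invented for illustration; this is not the ILOG API:

    using System.Collections.Generic;

    // Hypothetical abstractions -- not the ILOG API.
    public interface IRuleRepository
    {
        // The shared rules, e.g. read from a common database table.
        IEnumerable<string> LoadCommonRules();
    }

    public interface IRuleEngine
    {
        void AddRule(string ruleDefinition);
    }

    public class PricingService
    {
        private readonly IRuleEngine _engine;

        // The repository is injected, so each service owns its own engine
        // instance while the shared rules stay data-driven and centrally stored.
        public PricingService(IRuleEngine engine, IRuleRepository repository)
        {
            _engine = engine;
            foreach (var rule in repository.LoadCommonRules())
                _engine.AddRule(rule);   // load the common rules once, on startup
        }
    }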

What about a data-driven decision tree approach? Is a Rete rules engine really necessary, or is the "enterprise tool" decision driving your choice?
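
To make that alternative concrete, here is a bare-bones data-driven decision tree, no engine required (all names invented):

    using System;
    using System.Collections.Generic;

    // A node is either a leaf (Decision set) or a test with two branches.
    public class DecisionNode
    {
        public Func<IDictionary<string, object>, bool> Test;
        public DecisionNode IfTrue;
        public DecisionNode IfFalse;
        public string Decision;   // non-null on leaf nodes only

        public string Evaluate(IDictionary<string, object> facts)
        {
            if (Decision != null) return Decision;
            return Test(facts) ? IfTrue.Evaluate(facts) : IfFalse.Evaluate(facts);
        }
    }

    public static class Example
    {
        public static void Main()
        {
            // Orders over $1000 go to manual review; everything else auto-approves.
            var tree = new DecisionNode
            {
                Test = f => (decimal)f["OrderTotal"] > 1000m,
                IfTrue = new DecisionNode { Decision = "ManualReview" },
                IfFalse = new DecisionNode { Decision = "AutoApprove" }
            };

            var facts = new Dictionary<string, object> { { "OrderTotal", 250m } };
            Console.WriteLine(tree.Evaluate(facts));   // prints "AutoApprove"
        }
    }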

I'd try to set up the rules engine so it was as decoupled from the rest of the enterprise as possible. I wouldn't have it calling out to databases or services if I could avoid it. Better to make that the responsibility of the objects asking for a decision: let them call the necessary web services and databases to assemble the data, pass it to the rules engine, and let it do its thing. Coupling is your enemy: try to design your system to minimize it. Keeping rules engines isolated is a good way to do it.
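
A sketch of that separation, with hypothetical interfaces standing in for the real services, repository, and engine:

    // All I/O belongs to the caller; the engine sees only pre-assembled facts.
    public interface ICustomerService { string GetCreditRating(int customerId); }
    public interface IOrderRepository { Order GetById(int orderId); }
    public interface IDecisionEngine  { Decision Decide(OrderFacts facts); }

    public class Order { public int CustomerId; /* other order data */ }
    public class OrderFacts { public Order Order; public string CreditRating; }
    public class Decision { public string Outcome; }

    public class OrderProcessor
    {
        private readonly ICustomerService _customers;   // web service proxy
        private readonly IOrderRepository _orders;      // database access
        private readonly IDecisionEngine _rules;        // isolated rules engine

        public OrderProcessor(ICustomerService customers,
                              IOrderRepository orders,
                              IDecisionEngine rules)
        {
            _customers = customers;
            _orders = orders;
            _rules = rules;
        }

        public Decision ProcessOrder(int orderId)
        {
            // Assemble everything the rules need up front...
            var order = _orders.GetById(orderId);
            var facts = new OrderFacts
            {
                Order = order,
                CreditRating = _customers.GetCreditRating(order.CustomerId)
            };
            // ...then hand the engine plain data; it never calls out itself.
            return _rules.Decide(facts);
        }
    }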

duffymo
A: 

In my experience with rules engines, we've applied a pretty basic set of practices to govern interaction with the rules engine. First of all, these have always been commercial rules engines (ILOG, Corticon) rather than open source (Drools), so deploying locally to each of the app servers has never really been a viable option due to licensing costs. Hence, we've always gone with the centralized model, albeit in two primary flavors:

  • Remote Execution via Web Service - As you described in your question, we make calls to SOAP-based services provided by the rules engine product. Within the web service realm, we have come upon several options: (1) "Boxcar" the requests, allowing the application to queue up rules processing requests and send them over in chunks rather than as one-off messages (a sketch of the boxcar idea follows this list); (2) Tune the threading and process options provided by the vendor. This includes separating decision services out by function and allocating each its own W3WP and/or using web gardens. There is an awful lot of tweaking you can do with boxcars, threads, and processes, and getting the right mix is more a process of trial and error (and knowing your rule sets and data) than an exact science.
  • Remotely Call the Rules Engine in Process - A classic batch-style trick to avoid the overhead of serialization and deserialization: remotely make a call that fires up an in-process call to the rules engine. This can be done either on a schedule (e.g., batch) or on demand (i.e., "boxcars" of requests). Either way, much of the overhead of the service calls can be avoided by interacting directly with the process and the database. The downside of this approach is that you don't have IIS or your EJB/servlet container managing the threads for you; you have to do it yourself.
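
Here is a sketch of the "boxcar" idea from the first bullet, written against a hypothetical WCF contract; the real ILOG decision service API will differ:

    using System.Collections.Generic;
    using System.ServiceModel;

    // Hypothetical contract -- one round trip carries a whole boxcar of requests.
    [ServiceContract]
    public interface IDecisionService
    {
        [OperationContract]
        RuleResult[] ExecuteBatch(RuleRequest[] requests);
    }

    public class RuleRequest { /* facts for one decision */ }
    public class RuleResult  { /* outcome of one decision */ }

    public class BoxcarClient
    {
        private readonly IDecisionService _service;
        private readonly List<RuleRequest> _pending = new List<RuleRequest>();
        private readonly int _boxcarSize;

        public BoxcarClient(IDecisionService service, int boxcarSize)
        {
            _service = service;
            _boxcarSize = boxcarSize;   // tune by trial and error, as noted above
        }

        // Queue requests instead of sending one-off messages...
        public void Enqueue(RuleRequest request)
        {
            _pending.Add(request);
            if (_pending.Count >= _boxcarSize)
                Flush();
        }

        // ...and send them over in a single chunk when the boxcar is full.
        public RuleResult[] Flush()
        {
            var batch = _pending.ToArray();
            _pending.Clear();
            return _service.ExecuteBatch(batch);
        }
    }
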
Thomas Beck
A: 

I don't have much to say on the "which server" question, but I would urge you to develop decision services: callable services that use rules to make decisions but do not change the state of the business. Let the calling application/service/process decide what data changes to make as a result of calling the decision service, and have that calling component actually initiate the action(s) the decision service suggests. This makes it easier to use the decision service over and over again (across channels, processes, etc.). The cleaner and less tied to the rest of the infrastructure the decision service is, the more reusable and manageable it is going to be. The discussion here on ebizQ might be worth reading in this regard.
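
As a sketch of that shape, assume a hypothetical eligibility service: it reads facts and returns a decision plus suggested actions, but performs no updates itself; the caller decides what to act on:

    using System.Runtime.Serialization;
    using System.ServiceModel;

    // Hypothetical contract: the service decides, the caller changes state.
    [ServiceContract]
    public interface IEligibilityDecisionService
    {
        [OperationContract]
        EligibilityDecision Decide(ApplicantFacts facts);
    }

    [DataContract]
    public class ApplicantFacts
    {
        [DataMember] public int Age;
        [DataMember] public decimal AnnualIncome;
    }

    [DataContract]
    public class EligibilityDecision
    {
        [DataMember] public bool Eligible;

        // e.g. "SendOfferLetter" -- the calling component decides whether
        // and how to carry these out, so the decision stays side-effect free.
        [DataMember] public string[] SuggestedActions;
    }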

James Taylor
A: 

We're using ILOG Rules for .NET and have deployed a pilot project.

Here's a summary of our immature Rules Architecture:

  1. All data-access done outside of rules.
  2. Rules are deployed the same way as code (source control, release process, yada yada).
  3. Projects (services) that use Rules have a reference to ILOG.Rules.dll and new-up RuleEngines via a custom pooling class; RuleEngines are pooled because it is expensive to bind a RuleSet to a RuleEngine (a simplified pool appears after this list).
  4. Almost all rules are written to expect Assert'd objects, rather than RuleFlow parameters.
  5. Since the rules run in the same memory space, instances that are modified by the rules are the same instances in the program, which gives immediate propagation of state.
  6. Almost all rules are run via RuleFlow (even if it is a single RuleStep in the RuleFlow).
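
A simplified version of the pooling idea from item 3. It is generic here so it doesn't guess at ILOG's types; the factory delegate is where the expensive RuleSet-to-RuleEngine binding would happen:

    using System;
    using System.Collections.Generic;

    // Engines are pooled because creating one -- binding a RuleSet to a
    // RuleEngine -- is expensive, so we pay that cost once and reuse.
    public class EnginePool<TEngine>
    {
        private readonly Queue<TEngine> _idle = new Queue<TEngine>();
        private readonly Func<TEngine> _create;   // new engine + bind rule set
        private readonly object _sync = new object();

        public EnginePool(Func<TEngine> create, int initialSize)
        {
            _create = create;
            for (int i = 0; i < initialSize; i++)
                _idle.Enqueue(_create());         // bind up front, not per request
        }

        public TEngine Acquire()
        {
            lock (_sync)
            {
                if (_idle.Count > 0)
                    return _idle.Dequeue();
            }
            // Pool is empty: grow on demand (expensive, so done outside the lock).
            return _create();
        }

        public void Release(TEngine engine)
        {
            lock (_sync)
            {
                _idle.Enqueue(engine);
            }
        }
    }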

We're looking at RuleExecutionServer as a hosting platform, as well as RuleTeamServerForSharePoint as the host for rules source. Eventually, we will have rules deployed to production outside of the code release process.

The primary obstacle in all our rules endeavors has been modeling and rule-authoring skill sets.

David B