
Hi,

imagine something like this:

import class B.*;


interface A supports A.testSum
{
   int sum( int a , int b ) access from B.calculator;

   testSum() { Assert( sum(1,1) == 2 ); }

   ........
}


class B ...
{
   void calculator()      { A.sum(3,5);  /* ok */ }
   void someOtherMethod() { A.sum(0,3);  /* compile error */ }
}

The idea of the "supports" keyword is secondary but relevant, since the test applies to the interface in this case (so the language would discriminate between an interface test, which all implementations must pass, and an implementation test, which is specific to the implementation's private details).

But the important idea I want to convey here is the access control semantics: notice that A.sum, with the "access from" keyword, can only be called from the method B.calculator. Any other call site is detected as a compile-time error. The idea is to enforce architectural constraints in a more granular way. If you didn't add an "access from" clause, or added "access from *", you would get the default behavior of allowing the method to be called from anywhere. What sort of architectural constraints? The kind that are manually enforced when doing a layered design: layer A (lowest level) is used from layer B (intermediate level), which is in turn used from layer C (high level). But layer B is not accessible from layer A, and layer C is not accessible from either A or B, although it is public otherwise (it might be what the end user has direct access to).

Question: do you know of any language (including source-to-source intermediate languages) that supports the above semantics? Extra points for discussing whether this kind of semantics would be counterproductive, dangerous, or just encourage bad design.

A: 

This is do-able in Ruby, albeit with a different syntax. Take the following:

module T
    def check
        raise unless self.is_a?(Ca)
        raise unless %r{in `good_func'} =~ caller.first #`
        true
    end
end

class Ca
    include T
    def good_func
        check
    end
    def bad_func
        check
    end
end

class Cb
    include T
    def good_func
        check
    end
    def bad_func
        check
    end
end

a = Ca.new
b = Cb.new

a.good_func
=> true
a.bad_func
=> (RuntimeError)

b.good_func
=> (RuntimeError)
b.bad_func
=> (RuntimeError)

When the module is used as a mix-in, self refers to the object whose class includes the module. caller returns the current call stack, and caller.first gets you the first entry on the call stack (that is, the method that called this one).

bta
You'll have to forgive the syntax highlighter, the backtick is throwing it off. The comment at the end of the fourth line is meaningless and is only there to get the syntax highlighter back on track. Is there a cleaner way to escape that character? I forget all of the markup rules...
bta
Syntax differences are OK as long as the semantics are the same. However, this check would happen at runtime, right? So we validate the constraint each time the function is called, but it would be more satisfying to detect a violation at compile time and avoid introducing unnecessary runtime overhead.
lurscher
I would argue that if a function should only be callable by a specific function, then it should also exist as a logical sub-part of that function. Either make it a private sub-function that only exists inside the scope of `calculator` or make `calculator` an object with a public `calculate` method and a private `sum` method.
bta
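A minimal Java sketch of the structure bta describes, with illustrative names:

class Calculator {
    // Public entry point: the only way other code can reach sum().
    public int calculate(int a, int b) {
        return sum(a, b);       // fine: same class
    }

    // Private helper; calling it from any other class is a compile-time error.
    private int sum(int a, int b) {
        return a + b;
    }
}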
@lurscher- Compiled languages aren't typically going to have this kind of functionality because they don't tend to have the introspection capabilities of an interpreted language. With the Ruby code, you can include the checks when you run your unit tests and omit them in the production code (as long as you are sure that the production code will not be modified).
bta
@bta, the idea is that sometimes an interface is not implemented to be 100% reusable, so enforcing what can call you is a way to clearly state that the reusability of the current spec is limited. On the second comment, I think this kind of constraint can perfectly well be enforced at compile time. Notice that in the example above, B can also be an interface, so the constraint would apply to all subclasses. Why would I need introspection to enforce it?
lurscher
@lurscher- A function can't necessarily determine who is calling it at compile time (especially if function pointers are used). The easiest way to restrict function A to function B is to limit the scope of function A. If function A only exists inside of function B, then by definition it can't be called elsewhere. C/C++ doesn't support local functions, but it will let you put a *prototype* of function A inside of function B. If function A is not prototyped anywhere else, then it will only be accessible inside function B.
bta
That is correct: you can always override the compile-time checks with unsafe pointer casts, but at least the language made an effort to not let you blow your own foot off. The only idea I'm bringing to the table is that access in most languages is almost binary: public or private (plus a few intermediate states like sealed in C#), but there is no way to provide (sort of) access groups to classes. The interesting question is: would this make code more brittle and coupled? I'm trying to weigh all the aspects of this model.
lurscher
More important than "brittle" or "coupled", I would say that this technique promotes poor design habits by encouraging related functionality to be spread between separate objects/functions. You are essentially wanting to control the scope of a function. In most languages, this is done through the layout and design of your classes and functions, not by manual specifiers. Enforcing scope through code structure ensures that the scoping rules are applied consistently and predictably, and helps promote encapsulation and logical code structure.
bta
+2  A: 

This kinda sounds like a special case of the object-capability model. Perhaps there are languages that implement this in some way.

Similarly, a quick google around for "method-level security" led me to a few things that the enterprise Java community seem to have cooked up. I think that specialising this approach to just method-calling is kinda pointless. Unless you have a very good reason for doing this, I think it's probably a bad idea. If you're really interested in doing it for some reason, then really the model should be to get the receiver to check that the invocation source is in some permitted set.
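As a rough Java sketch of that receiver-side check (runtime only, and admittedly fragile: the class and method names are illustrative, and stack inspection breaks under refactoring or reflective calls):

import java.util.Set;

class Calculator {
    // Call sites allowed to invoke sum(), written as "ClassName#methodName".
    private static final Set<String> PERMITTED_CALLERS = Set.of("LayerB#calculate");

    int sum(int a, int b) {
        // Frame 0 is sum() itself; frame 1 is whoever called it.
        StackWalker.StackFrame caller = StackWalker.getInstance()
                .walk(frames -> frames.skip(1).findFirst())
                .orElseThrow();
        String callerId = caller.getClassName() + "#" + caller.getMethodName();
        if (!PERMITTED_CALLERS.contains(callerId)) {
            throw new IllegalStateException("sum() called from disallowed site: " + callerId);
        }
        return a + b;
    }
}

class LayerB {
    int calculate()       { return new Calculator().sum(3, 5); }  // permitted
    int someOtherMethod() { return new Calculator().sum(0, 3); }  // throws at runtime
}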

In any case, this is basically breaking most programming models pretty badly. You'd be much better off enforcing preconditions and class invariants to ensure that any method invocation (from anywhere!) is meaningful or well-behaved. If you're using it to enforce method ordering, that can be achieved using invariant checking (statically or at runtime), or theoretical models such as Interface Automata.
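For contrast, a minimal sketch of the precondition/invariant style applied to the same sum example (purely illustrative, not tied to any contract library):

class Calculator {
    // sum() is callable from anywhere; instead of restricting callers, it defends its own contract.
    int sum(int a, int b) {
        if (a < 0 || b < 0)
            throw new IllegalArgumentException("operands must be non-negative"); // precondition
        int result = a + b;
        assert result >= a : "postcondition violated: integer overflow";          // postcondition
        return result;
    }
}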

Gian
That makes sense; design by contract is a better and more generic kind of constraint.
lurscher
+1  A: 

Java supports something pretty much like this.

First of all, visibility of fields and methods is enforced at runtime; it is not possible for unprivileged code to bypass this.

You can also make your own privileges and grant them to certain parts of code. For example, to open a file, the code that wants to access a file needs FilePermission for that file. You can make any kind of permission you wish, though; it's possible to make a permission called SumPermission which Calculator checks before summing, and only grant it to whatever classes you want. Protection domains span classes, not individual methods in the classes, because a whole class is generally obtained from a single source.

The model in fact goes deeper than what you proposed. Every class on the stack (including the history of thread creations) leading up to a security check must have the permission, so if some untrusted code calls your code that has SumPermission, it will fail the security check. Of course this is only the default; whenever you do anything that needs permissions, you can use a doPrivileged block to tell the upcoming check to only check your permissions instead of both yours and your callers'.
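A rough sketch of what that could look like (SumPermission and Calculator are the names used above; everything else, including the policy grant and its path, is illustrative, and the stock security manager has to be enabled, e.g. with -Djava.security.manager, for the check to fire):

import java.security.BasicPermission;

// The custom permission mentioned above; BasicPermission supplies name-based equals/implies.
public final class SumPermission extends BasicPermission {
    public SumPermission(String name) { super(name); }
}

class Calculator {
    int sum(int a, int b) {
        SecurityManager sm = System.getSecurityManager();
        if (sm != null) {
            // Every protection domain on the call stack must have been granted SumPermission "sum",
            // unless some caller wrapped the call in AccessController.doPrivileged.
            sm.checkPermission(new SumPermission("sum"));
        }
        return a + b;
    }
}

// Illustrative policy file entry granting the permission to a single code source:
// grant codeBase "file:/path/to/trusted/classes/" {
//     permission SumPermission "sum";
// };

With the security manager enabled, any call chain that includes a protection domain without that grant fails the check with a SecurityException.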

However, the current default security scheme in Java has many limitations. For one, untrusted code can't subdivide its permissions or define its own permissions for nested untrusted code. Also, it's a pain to guard against untrusted code that blocks.

You may want to check out E. In particular, it follows the Object-Capability Model. It is made for mutually untrusted code to interact securely, and has language level constructs to prevent deadlocking issues.

It's perfectly possible to implement robust interaction between mutually untrusted code in Java, but E will probably make your job much easier, and it runs on the JVM, so you should still be able to use Java libraries and libraries from any other language that targets the JVM.

Longpoke