In Haskell, `where` clauses hold definitions local to a function. Scala does not have explicit `where` clauses, but the same functionality can be achieved with local `var`, `val` and `def` definitions.
Local `var` and `val`
In Scala:
def foo(x: Int, y: Int): Int = {
  val a = x + y
  var b = x * y
  a - b
}
In Haskell:
foo :: Integer -> Integer -> Integer
foo x y = a - b
  where
    a = x + y
    b = x * y
Local `def`
In Scala:
def foo(x: Int, y: Int): Int = {
  def bar(x: Int) = x * x
  y + bar(x)
}
In Haskell:
foo :: Integer -> Integer -> Integer
foo x y = y + bar x
  where
    bar x = x * x
Please correct me if I have made any syntax errors in the Haskell example, as I currently have no Haskell compiler installed on this computer :).
More complicated examples can be achieved in similar ways (for example using pattern matching, which both languages support). Local functions have exactly the same syntax as any other function; the only difference is that their scope is the block in which they are defined.
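For instance, here is a small sketch of a local helper combined with pattern matching (the names `describe` and `classify` are made up for illustration):
def describe(xs: List[Int]): String = {
  // local helper, only visible inside describe
  def classify(n: Int): String = n match {
    case 0          => "zero"
    case m if m < 0 => "negative"
    case _          => "positive"
  }
  xs match {
    case Nil    => "empty list"
    case x :: _ => "starts with a " + classify(x) + " number"
  }
}
Calling `describe(List(-3, 1))` would return `"starts with a negative number"`.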
EDIT: Also see Daniel's answer for such an example and some elaboration on the subject.
EDIT 2: Added a discussion about lazy `val`s.
Lazy `val`
Edward Kmett's answer rightly pointed out that Haskell's `where` clause gives you laziness and purity. You can do something very similar in Scala using `lazy val`s, which are only evaluated when first needed. Consider the following example:
def foo(x: Int, y: Int) = {
  print("--- Line 1: ")
  lazy val lazy1: Int = { print("-- lazy1 evaluated "); x * x }
  println()
  print("--- Line 2: ")
  lazy val lazy2: Int = { print("-- lazy2 evaluated "); y * y }
  println()
  print("--- Line 3: ")
  lazy val lazy3: Int = {
    print("-- lazy3 evaluated ")
    while (true) {} // infinite loop!
    x * x + y * y
  }
  println()
  print("--- Line 4 (if clause): ")
  if (x < y) lazy1 + lazy2
  else lazy2 + lazy1
}
Here `lazy1`, `lazy2` and `lazy3` are all lazy values. `lazy3` is never evaluated (therefore this code never enters an infinite loop), and the order in which `lazy1` and `lazy2` are evaluated depends on the arguments of the function. For example, when you call `foo(1, 2)` you will get `lazy1` evaluated before `lazy2`, and when you call `foo(2, 1)` you will get the reverse. Try the code out in the Scala interpreter and see the printout! (I won't put it here, as this answer is already quite long.)
You could achieve similar results if, instead of lazy values, you used no-argument functions. In the example above, you could replace every `lazy val` with a `def` and achieve similar results. The difference is that a lazy value is cached (that is, evaluated only once), while a `def` is evaluated every time it is invoked.
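As a small sketch of that difference (the names `cached` and `recomputed` are made up for illustration):
def cachingDemo(x: Int): Int = {
  lazy val cached = { println("evaluating lazy val"); x * x } // evaluated once, then cached
  def recomputed  = { println("evaluating def"); x * x }      // evaluated on every call
  cached + cached + recomputed + recomputed
}
Calling `cachingDemo(3)` should print "evaluating lazy val" once but "evaluating def" twice, even though both give the same result.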
EDIT 3: Added a discussion about scoping, see question.
Scope of local definitions
Local definitions have the scope of the block they are declared in, as expected (well, most of the time; in rare situations they can escape the block, like when using mid-stream variable binding in for loops). Therefore local `var`, `val` and `def` definitions can be used to limit the scope of an expression. Take the following example:
object Obj {
  def bar = "outer scope"

  def innerFun() {
    def bar = "inner scope"
    println(bar) // prints inner scope
  }

  def outerFun() {
    println(bar) // prints outer scope
  }

  def smthDifferent() {
    println(bar) // prints inner scope ! :)
    def bar = "inner scope"
    println(bar) // prints inner scope
  }

  def doesNotCompile() {
    {
      def fun = "fun" // local to this block
      42 // blocks must not end with a definition...
    }
    println(fun)
  }
}
Both `innerFun()` and `outerFun()` behave as expected. The definition of `bar` in `innerFun()` hides the `bar` defined in the enclosing scope. Also, the function `fun` is local to its enclosing block, so it cannot be used outside it; the method `doesNotCompile()`... does not compile. It is interesting to note that both `println()` calls in the `smthDifferent()` method print `inner scope`. Therefore, yes, you can put definitions after they are used inside methods! I wouldn't recommend it though, as I think it is bad practice (at least in my opinion). In class bodies you can arrange method definitions as you like, but I would keep all the `def`s inside a function before they are used. As for `val`s and `var`s... well... I find it awkward to put them after they are used.
Also note that each block must end with an expression, not with a definition; therefore you cannot put all the definitions at the end of a block. I would probably put all the definitions at the start of a block and then write all the logic producing the result at the end of that block. That seems more natural than:
{
  // some logic
  // some defs
  // some other logic, returning the result
}
As I previously said, you cannot end a block with just `// some defs`. This is where Scala differs slightly from Haskell :).
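To make that layout concrete, a block following it might look something like this (just a sketch, with made-up names, assuming a non-empty list):
def stats(xs: List[Int]): (Int, Int) = {
  // definitions at the start of the block...
  val total = xs.sum
  val largest = xs.max // assumes xs is non-empty
  // ...and the logic producing the result at the end,
  // so the block ends with an expression, not a definition
  (total, largest)
}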
EDIT 4: Elaborated on defining stuff after using them, prompted by Kim's comment.
Defining 'stuff' after using them
This is a tricky thing to implement in a language that has side effects. In a pure, side-effect-free world the order would not matter (definitions could not depend on any side effects). But, since Scala allows side effects, the place where you define a function does matter. Also, when you define a `val` or a `var`, the right-hand side must be evaluated on the spot in order to initialize it. Consider the following example:
// does not compile :)
def foo(x: Int) = {
  // println *has* to execute now, but it
  // cannot call f(10), as the closure that
  // you call has not been created yet!
  // it's similar to calling a variable that is null
  println(f(10))
  var aVar = 1
  // the closure has to be created here,
  // as it cannot capture aVar otherwise
  def f(i: Int) = i + aVar
  aVar = aVar + 1
  f(10)
}
The example you give does work, though, if the `val`s are `lazy` or if they are `def`s.
def foo(): Int = {
  println(1)
  lazy val a = { println("a"); b }
  println(2)
  lazy val b = { println("b"); 1 }
  println(3)
  a + a
}
This example also nicely shows caching at work (try changing the `lazy val`s to `def`s and see what happens :)).
I still think that in a world with side effects it's better to stick to having definitions before you use them; it's easier to read source code that way.
-- Flaviu Cipcigan