Magnet pattern and overloaded methods
There is a significant difference in how Scala resolves implicit conversions from the "Magnet Pattern" for non-overloaded and overloaded methods.
Suppose there is a trait Apply (a variation of a "Magnet Pattern") implemented as follows.
trait Apply[A] {
  def apply(): A
}

object Apply {
  implicit def fromLazyVal[A](v: => A): Apply[A] = new Apply[A] {
    def apply(): A = v
  }
}
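Before wiring it into anything else, a minimal sketch (assuming only the definitions above are in scope) shows what the conversion does on its own:

// Any expression of type A can be used where an Apply[A] is expected;
// fromLazyVal wraps it, capturing the expression by-name.
val a: Apply[Int] = 42   // desugars to Apply.fromLazyVal(42)
a()                      // evaluates the captured expression: 42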
Now we create a trait Foo that has a single apply taking an instance of Apply, so we can pass it any value of an arbitrary type A, since there is an implicit conversion from A to Apply[A].
trait Foo[A] {
  def apply(a: Apply[A]): A = a()
}
We can make sure it works as expected using the REPL and the showCode/reify trick to desugar Scala code.
scala> val foo = new Foo[String]{}
foo: Foo[String] = $anon$1@3a248e6a
scala> showCode(reify { foo { "foo" } }.tree)
res9: String =
$line21$read.foo.apply(
$read.INSTANCE.Apply.fromLazyVal("foo")
)
This works great, but suppose we pass a complex expression (one containing a ;) to the apply method.
scala> val foo = new Foo[Int]{}
foo: Foo[Int] = $anon$1@5645b124
scala> var i = 0
i: Int = 0
scala> showCode(reify { foo { i = i + 1; i } }.tree)
res10: String =
$line23$read.foo.apply({
$line24$read.`i_=`($line24$read.i.+(1));
$read.INSTANCE.Apply.fromLazyVal($line24$read.i)
})
As we can see, the implicit conversion has been applied only to the last part of the complex expression (ie, i), not to the whole expression. So i = i + 1 was strictly evaluated at the moment we passed it to the apply method, which is not what we were expecting.
Good (or bad) news: we can make scalac use the whole expression in the implicit conversion, so i = i + 1 will be evaluated lazily as expected. To do so, we (surprise, surprise!) add an overloaded method Foo.apply that takes some type other than Apply.
trait Foo[A] {
  def apply(a: Apply[A]): A = a()
  def apply(s: Symbol): Foo[A] = this
}
And then:
scala> var i = 0
i: Int = 0
scala> val foo = new Foo[Int]{}
foo: Foo[Int] = $anon$1@3ff00018
scala> showCode(reify { foo { i = i + 1; i } }.tree)
res11: String =
$line28$read.foo.apply($read.INSTANCE.Apply.fromLazyVal({
$line27$read.`i_=`($line27$read.i.+(1));
$line27$read.i
}))
As we can see, the entire expression i = i + 1; i made it under the implicit conversion, as expected.
So my question is: why is that? Why does the scope to which an implicit conversion is applied depend on whether or not there is an overloaded method in the class?
Now, that is a tricky one, and it's actually pretty awesome. I didn't know that "workaround" to the "lazy implicit does not cover the full block" problem. Thanks for that!
What happens is related to expected types, and how they affect type inference, implicit conversions, and overloads.
Type inference and expected types
First, we have to know that type inference in Scala is bi-directional. Most of the inference works bottom-up (given a: Int and b: Int, infer a + b: Int), but some things are top-down. For example, inferring the parameter types of a lambda is top-down:
def foo(f: Int => Int): Int = f(42)
foo(x => x + 1)
In the second line, after resolving foo to be def foo(f: Int => Int): Int, the type inferencer can tell that x must be of type Int. It does so before typechecking the lambda itself. It propagates type information from the function application down to the lambda, which is a parameter.
Top-down inference basically relies on the notion of expected type. When typechecking a node of the AST of the program, the typechecker does not start empty-handed. It receives an expected type from "above" (in this case, the function application node). When typechecking the lambda x => x + 1 in the above example, the expected type is Int => Int, because we know what parameter type is expected by foo. This drives the type inference into inferring Int for the parameter x, which in turn allows x + 1 to be typechecked.
Expected types are propagated down certain constructs, eg, blocks ({}) and the branches of ifs and matches. Hence, you could also call foo with
foo({
  val y = 1
  x => x + y
})
and the typechecker is still able to infer x: Int. That is because, when typechecking the block { ... }, the expected type Int => Int is passed down to the typechecking of the last expression, ie, x => x + y.
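The same happens for the branches of an if; a small sketch, reusing the foo above:

// The expected type Int => Int is pushed into both branches of the if,
// so `x` is still inferred to be an Int in each lambda.
foo(if (scala.util.Random.nextBoolean()) x => x + 1 else x => x - 1)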
Implicit conversions and expected types
Now, we have to introduce implicit conversions into the mix. When typechecking a node produces a value of type T, but the expected type for that node is U where T <: U is false, the typechecker looks for an implicit T => U (I'm probably simplifying things a bit here, but the gist is still true). This is why your first example does not work. Let us look at it closely:
trait Foo[A] {
  def apply(a: Apply[A]): A = a()
}

val foo = new Foo[Int] {}
foo({
  i = i + 1
  i
})
When calling foo.apply, the expected type for the parameter (ie, the block) is Apply[Int] (A has already been instantiated to Int). We can "write" this typechecker "state" like this:
{
  i = i + 1
  i
}: Apply[Int]
This expected type is passed down to the last expression of the block, which gives:
{
  i = i + 1
  (i: Apply[Int])
}
At this point, since i: Int and the expected type is Apply[Int], the typechecker finds the implicit conversion:
{
  i = i + 1
  fromLazyVal[Int](i)
}
which causes only i to be lazified.
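You can observe this at runtime as well; a sketch assuming the Apply/fromLazyVal definitions from the question. Since only i is wrapped, the assignment runs eagerly, before apply() is ever called:

var i = 0
// Expected type Apply[Int] is pushed into the block, so this desugars to
// { i = i + 1; fromLazyVal(i) } -- the assignment is evaluated right here.
val eager: Apply[Int] = { i = i + 1; i }
println(i)        // 1 -- the side effect already happened
println(eager())  // 1 -- apply() just re-reads the current value of i
println(eager())  // 1 -- calling it again never re-runs the assignment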
Overloads and expected types
OK, time to throw overloads in there! When the typechecker sees an application of an overloaded method, it has much more trouble deciding on an expected type. We can see that with the following example:
object Foo {
  def apply(f: Int => Int): Int = f(42)
  def apply(f: String => String): String = f("hello")
}

Foo(x => x + 1)
gives:
error: missing parameter type
Foo(x => x + 1)
^
In this case, the failure of the typechecker to figure out an expected type causes the parameter type not to be inferred.
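As a side note, a small sketch of one way around that particular error: give the parameter an explicit type, so overload resolution has a fully typed argument to work with.

// The argument now has a definite type Int => Int,
// so the first overload is selected.
Foo((x: Int) => x + 1)   // 43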
If we take your "solution" to your issue, we have a different consequence:
trait Foo[A] {
  def apply(a: Apply[A]): A = a()
  def apply(s: Symbol): Foo[A] = this
}

val foo = new Foo[Int] {}
foo({
  i = i + 1
  i
})
Now, when typechecking the block, the typechecker has no expected type to work with. It will therefore typecheck the last expression without an expected type, and eventually typecheck the whole block as an Int:
{
  i = i + 1
  i
}: Int
Only now, with an already typechecked argument, does it try to resolve the overloads. Since none of the overloads conforms directly, it tries to apply an implicit conversion from Int to either Apply[Int] or Symbol. It finds fromLazyVal[Int], which it applies to the entire argument. It does not push it inside the block anymore, giving:
fromLazyVal({
  i = i + 1
  i
}): Apply[Int]
In this case, the whole block is lazified.
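Again, the runtime behavior matches; a sketch, with the same Apply in scope. Writing the conversion over the whole block by hand, as the overloaded case effectively does, defers the assignment until apply() is called, and re-runs it on every call:

var i = 0
// The entire block is the by-name argument of fromLazyVal,
// so nothing is evaluated until apply() is invoked.
val whole: Apply[Int] = Apply.fromLazyVal { i = i + 1; i }
println(i)        // 0 -- no side effect yet
println(whole())  // 1
println(whole())  // 2 -- the block is re-evaluated on each call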
This concludes the explanation. To summarize, the major difference is the presence vs absence of an expected type when typechecking the block. With an expected type, the implicit conversion is pushed down as much as possible, down to just i. Without the expected type, the implicit conversion is applied a posteriori on the entire argument, ie, the whole block.