I know that Monad can be expressed in Scala as follows:
trait Monad[F[_]] {
def flatMap[A, B](f: A => F[B]): F[A] => F[B]
}
I see why it is useful. For example, given two functions:
def getUserById(userId: Int): Option[User] = ...
def getPhone(user: User): Option[Phone] = ...
I can easily write the function getPhoneByUserId(userId: Int), since Option is a monad:
def getPhoneByUserId(userId: Int): Option[Phone] =
getUserById(userId).flatMap(user => getPhone(user))
...
Now I see Applicative Functor in Scala:
trait Applicative[F[_]] {
def apply[A, B](f: F[A => B]): F[A] => F[B]
}
I wonder when I should use it instead of a monad. I guess both Option and List are Applicatives. Could you give simple examples of using apply with Option and List, and explain why I should use it instead of flatMap?
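Here is my own attempt at instances, if it helps clarify what I mean (my sketch, not code from any library):

```scala
trait Applicative[F[_]] {
  def apply[A, B](f: F[A => B]): F[A] => F[B]
}

// My sketch of an instance for Option (not library code).
val optionApplicative: Applicative[Option] = new Applicative[Option] {
  def apply[A, B](f: Option[A => B]): Option[A] => Option[B] =
    fa => for { g <- f; a <- fa } yield g(a)
}

// My sketch of an instance for List (not library code).
val listApplicative: Applicative[List] = new Applicative[List] {
  def apply[A, B](f: List[A => B]): List[A] => List[B] =
    fa => for { g <- f; a <- fa } yield g(a)
}

// Option: applies the wrapped function if both sides are defined.
val o = optionApplicative.apply(Option((x: Int) => x + 1))(Option(2))
// o == Some(3)

// List: applies every function to every element.
val l = listApplicative.apply(List((x: Int) => x + 1, (x: Int) => x * 2))(List(10, 20))
// l == List(11, 21, 20, 40)
```

Is that roughly the right idea, and when is this form preferable to flatMap?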
To quote myself:
So why bother with applicative functors at all, when we've got monads? First of all, it's simply not possible to provide monad instances for some of the abstractions we want to work with—Validation is the perfect example. Second (and relatedly), it's just a solid development practice to use the least powerful abstraction that will get the job done. In principle this may allow optimizations that wouldn't otherwise be possible, but more importantly it makes the code we write more reusable.
To expand a bit on the first paragraph: sometimes you don't have a choice between monadic and applicative code. See the rest of that answer for a discussion of why you might want to use Scalaz's Validation (which doesn't and can't have a monad instance) to model validation.
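To make that concrete without pulling in Scalaz, here is a hand-rolled, Validation-like type (a sketch; Scalaz's actual Validation differs). A lawful flatMap has to short-circuit at the first failure, because the second computation is a function of the first result; the applicative-style map2 below sees both sides at once and can accumulate the errors.

```scala
// A minimal Validation-like type (a sketch, not Scalaz's implementation).
sealed trait Validated[+E, +A]
case class Valid[A](a: A) extends Validated[Nothing, A]
case class Invalid[E](errors: List[E]) extends Validated[E, Nothing]

// Applicative-style combination: both sides are inspected, errors accumulate.
// A monadic flatMap could not do this; it must stop at the first Invalid.
def map2[E, A, B, C](va: Validated[E, A], vb: Validated[E, B])(f: (A, B) => C): Validated[E, C] =
  (va, vb) match {
    case (Valid(a), Valid(b))       => Valid(f(a, b))
    case (Invalid(e1), Invalid(e2)) => Invalid(e1 ++ e2)
    case (Invalid(e), _)            => Invalid(e)
    case (_, Invalid(e))            => Invalid(e)
  }

val checked = map2[String, String, Int, (String, Int)](
  Invalid(List("name is empty")),
  Invalid(List("age is negative"))
)((name, age) => (name, age))
// checked == Invalid(List("name is empty", "age is negative"))
```

Both error messages survive, which is exactly the behavior a monad instance would rule out.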
About the optimization point: it'll probably be a while before this is generally relevant in Scala or Scalaz, but see for example the documentation for Haskell's Data.Binary:
The applicative style can sometimes result in faster code, as binary will try to optimize the code by grouping the reads together.
Writing applicative code allows you to avoid making unnecessary claims about dependencies between computations—claims that similar monadic code would commit you to. A sufficiently smart library or compiler could in principle take advantage of this fact.
To make this idea a little more concrete, consider the following monadic code:
case class Foo(s: Symbol, n: Int)
val maybeFoo = for {
s <- maybeComputeS(whatever)
n <- maybeComputeN(whatever)
} yield Foo(s, n)
The for-comprehension desugars to something more or less like the following:
val maybeFoo = maybeComputeS(whatever).flatMap(
s => maybeComputeN(whatever).map(n => Foo(s, n))
)
We know that maybeComputeN(whatever) doesn't depend on s (assuming these are well-behaved methods that aren't changing some mutable state behind the scenes), but the compiler doesn't—from its perspective it needs to know s before it can start computing n.
The applicative version (using Scalaz) looks like this:
val maybeFoo = (maybeComputeS(whatever) |@| maybeComputeN(whatever))(Foo(_, _))
Here we're explicitly stating that there's no dependency between the two computations.
(And yes, this |@| syntax is pretty horrible—see this blog post for some discussion and alternatives.)
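If you want to see the same shape without Scalaz, here's a hand-rolled map2 for Option (my sketch, not Scalaz's API): both computations arrive as ordinary values, so the signature itself states that neither depends on the other, unlike flatMap, whose second argument is a function of the first result.

```scala
case class Foo(s: Symbol, n: Int)

// Applicative-style combinator for Option (a sketch, not Scalaz's API):
// both arguments are plain values, so neither can depend on the other.
def map2[A, B, C](fa: Option[A], fb: Option[B])(f: (A, B) => C): Option[C] =
  (fa, fb) match {
    case (Some(a), Some(b)) => Some(f(a, b))
    case _                  => None
  }

val maybeFoo = map2(Option(Symbol("a")), Option(42))(Foo(_, _))
// maybeFoo == Some(Foo(Symbol("a"), 42))
```

This is essentially what |@| builds for you, minus the operator syntax.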
The last point is really the most important, though. Picking the least powerful tool that will solve your problem is a tremendously powerful principle. Sometimes you really do need monadic composition—in your getPhoneByUserId method, for example—but often you don't.
It's a shame that both Haskell and Scala currently make working with monads so much more convenient (syntactically, etc.) than working with applicative functors, but this is mostly a matter of historical accident, and developments like idiom brackets are a step in the right direction.