
A Monadic view of asynchronous programming using Guava

Let’s first start with a definition of what we mean by asynchronous programming.

In normal (sequential) programming, computation starts in one execution context and then continues by executing one instruction after another within the same context. In this style, it is easy to follow the “flow” of the program: one can just step through the instructions, including function calls and returns, to reason about how the program executes.

By asynchronous programming, we are referring to the style of programming where the computation is spread across different execution contexts. This is useful when the cost of the computation is greater than the cost of switching across these different contexts.

This switching is never free: at the CPU level, it involves suspending the current context, saving its program data (register values, program counters, stack references) and then swapping in the data for the next context. If the data for the new context is not in the cache, there are further costs due to cache misses and loads from memory.

In practice, this is most useful for I/O-related computation: the cost of waiting for I/O is so large (compared to cache and memory access) that switching I/O operations to a different context, and resuming them when they complete, frees up the original context to perform other operations and thus leads to better utilization of the compute resources. For the end-user, it means that the same computer can now do “more work”.

While the benefit is clear, namely more bang for the buck, a serious problem one quickly runs into with this style of programming is that it becomes very hard to write and follow the “flow” of execution. Attempts to do so typically lead to complaints about “callback hell”, which reflects the frustration of trying to make this work.

So the question arises: Is there a way to represent this style of computation, such that we can reason about and see the “flow” of code across these execution contexts, independent of the actual steps being performed during each computation?

Some languages try to solve this problem using async/await: with the await keyword, the programmer writes the code in a sequential style, and the compiler then transforms this code into the appropriate context switches and callbacks, so that the actual execution performs the context switches “under the covers”, so to speak.

Monad

It turns out that applying the concepts of Haskell Monads might be another way of thinking about the same problem. Monads in Haskell provide a way to represent stateful computation, and the type class is defined as follows:

class Applicative m => Monad (m :: * -> *) where
  (>>=) :: m a -> (a -> m b) -> m b
  (>>) :: m a -> m b -> m b
  return :: a -> m a
  fail :: String -> m a
  {-# MINIMAL (>>=) #-}

If you are not familiar with Haskell, one way of thinking about this is that a type m can be said to have Monadic behavior if it supports the following operations:

- >>= (pronounced “bind”): given a value in the context, m a, and a function a -> m b that produces the next value in the context, it returns the combined result m b.
- >>: sequences two computations in the context, discarding the result of the first.
- return: takes a plain value a and wraps it in the context as m a.
- fail: turns an error message into a failing computation in the context.

Finally, the definition also specifies that the minimal implementation of a Monad is the >>= operation.

So why is this useful? It turns out that this is an extremely elegant way to represent “flow of computation within a context”, where the context can be anything we want it to be!

Let’s look at it from the point of view of performing simple arithmetic computations in two different contexts, namely, Maybe and Either.

The Maybe data type can be used in Haskell to represent optional values. It’s defined as follows:

data Maybe a = Nothing | Just a

which can be read as: a Maybe value (pun intended) is either just the value a, represented as Just a, or Nothing (no value). So, let’s say we want to be able to perform arithmetic computations with Maybe numbers and not just plain numbers. Here, if the value is Nothing, then the result of the computation should also be Nothing, but if we have Just a, we want to be able to do regular math with it.

Firing up GHCi (the REPL for Haskell), let’s first define the following functions:

Prelude> :set +t
Prelude> let maybeAdd2 x = Just $ x + 2
maybeAdd2 :: Num a => a -> Maybe a
Prelude> let maybeMul8 x = Just $ x * 8
maybeMul8 :: Num a => a -> Maybe a

The :set +t is just an incantation to get GHCi to print the type signature of each expression we define. The first function, maybeAdd2, takes a value x and returns the Monadic (Maybe) value of adding 2 to it. The $ operator is low-precedence function application: it applies the Just constructor of the Maybe type to the result of the computation x + 2, saving a pair of parentheses. Similarly, we define maybeMul8 as a function that takes a number and returns the Monadic result of multiplying it by 8.

The key thing to note here is the type signature of both these functions, which reads Num a => a -> Maybe a. This is Haskell’s way of saying that, given a value a of a numeric type (Num), these functions produce a value of type Maybe a, which, coincidentally, is exactly the kind of function that >>= expects as its second argument ;-).

Given that Haskell already includes an implementation of the Maybe type as a Monad, we can then compose these arithmetic operations as follows:

Prelude> return 2 >>= maybeAdd2 >>= maybeMul8
Just 32
Prelude> return 2 >>= maybeAdd2 >>= maybeMul8 >>= maybeAdd2 >>= maybeMul8
Just 272
Prelude> return 2 >>= maybeAdd2 >>= maybeMul8 >>= maybeAdd2 >>= maybeMul8 >>= maybeMul8
Just 2176
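
To see what is going on, here is a hand expansion of the first chain (a worked trace in the Maybe context, not GHCi output):

return 2                 -- Just 2
Just 2 >>= maybeAdd2     -- maybeAdd2 2 evaluates to Just 4
Just 4 >>= maybeMul8     -- maybeMul8 4 evaluates to Just 32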

Arithmetic on optional values! But what about failures? In the case of optional values, a failure can be thought of as ‘no value’, or in other words, Nothing. Let’s define a function maybeFail that, given any value, produces Nothing, add it to the chain and see what happens:

Prelude> let maybeFail x = Nothing
maybeFail :: t -> Maybe a
Prelude> return 2 >>= maybeAdd2 >>= maybeMul8 >>= maybeAdd2 >>= maybeFail >>= maybeMul8 >>= maybeMul8
Nothing
Prelude> return 2 >>= maybeAdd2 >>= maybeFail >>= maybeMul8
Nothing
Prelude> return 2 >>= maybeFail >>= maybeAdd2 >>= maybeMul8
Nothing

It just works. The values flow through the computation because the Monad instance for the Maybe type defines >>= such that if the left value is Nothing, it simply returns Nothing without ever calling the function. [Note: the t -> Maybe a signature means that the type of x does not matter; the function works for any type x might have, represented by the type variable t.]
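
For reference, the standard library’s Monad instance for Maybe behaves essentially like the following simplified sketch (the actual Prelude definition is written slightly differently, but the semantics are the same):

instance Monad Maybe where
    (Just x) >>= f = f x
    Nothing  >>= _ = Nothing
    return         = Just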

Similarly, the Either data type is used to represent either a Right value of one type or a Left value (conventionally an error) of another type. It’s defined as:

data Either a b = Left a | Right b

As in the case of the Maybe type, we can define equivalent operations for the Either type as follows:

Prelude> let eitherAdd2 x = Right $ x + 2
eitherAdd2 :: Num b => b -> Either a b
Prelude> let eitherMul8 x = Right $ x * 8
eitherMul8 :: Num b => b -> Either a b

And the operations compose exactly as before:

Prelude> return 2 >>= eitherAdd2 >>= eitherMul8
Right 32
Prelude> return 2 >>= eitherAdd2 >>= eitherMul8 >>= eitherAdd2 >>= eitherMul8
Right 272
Prelude> return 2 >>= eitherAdd2 >>= eitherMul8 >>= eitherAdd2 >>= eitherMul8 >>= eitherMul8
Right 2176

The equivalent of failure for the Either type is a Left value; we can introduce a failing function, and it composes just as before:

Prelude> let eitherFail x = Left "Dam broke and number overflowed!"
eitherFail :: t -> Either [Char] b
Prelude> return 2 >>= eitherAdd2 >>= eitherMul8 >>= eitherAdd2 >>= eitherFail >>= eitherMul8 >>= eitherMul8
Left "Dam broke and number overflowed!"
Prelude> return 2 >>= eitherAdd2 >>= eitherFail >>= eitherMul8
Left "Dam broke and number overflowed!"
Prelude> return 2 >>= eitherFail >>= eitherAdd2 >>= eitherMul8
Left "Dam broke and number overflowed!"
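
The Monad instance for Either a behaves analogously; a simplified sketch of it shows the short-circuiting on the first Left value:

instance Monad (Either a) where
    Right x >>= f = f x
    Left e  >>= _ = Left e
    return        = Right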

The key insight here is that Monads allow one to represent the composition of operations independently of what the actual operations are. These operations occur within a context called a Monad. As can be seen, the context can be anything, as long as it follows the rules, and the operations can then be composed.

Guava

So, coming back to asynchronous programming, Google’s Guava Java library contains the com.google.common.util.concurrent package, which is a collection of utilities for concurrent and parallel programming in Java. The heart of this package consists of the ListenableFuture abstraction and a set of utilities in the Futures class.

If we consider ListenableFuture as the type representing the context (equivalent to Either and Maybe above), we can think of asynchronous programming as a Monad, logically defined as follows:


-- this code will not compile; it is an abstraction,
-- represented here only for conceptualization ...

instance Monad ListenableFuture where
    (>>=)  = Futures.transformAsync
    return = Futures.immediateFuture
    fail   = throw

While this is not exactly working code, it provides a nice way to “reason” about how the various steps in the computation can be composed. We can look at Futures.transform as an abstraction over Futures.transformAsync that adapts (or hoists) non-asynchronous code into the asynchronous space.
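
Ignoring the Executor argument that Guava’s real methods take, the analogy can be summarized with the following purely illustrative, Haskell-style type sketch:

-- illustrative only; the real Guava signatures also take an Executor
transformAsync :: ListenableFuture a -> (a -> ListenableFuture b) -> ListenableFuture b   -- plays the role of >>=
transform      :: ListenableFuture a -> (a -> b)                  -> ListenableFuture b
-- transform f is roughly transformAsync (immediateFuture . f)

With that picture in mind, the following program wraps ListenableFuture in a small MonadicListenableFuture class and composes the same arithmetic steps as before: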

package org.satyadeep.monadicguava;

import com.google.common.base.Function;
import com.google.common.util.concurrent.Futures;
import com.google.common.util.concurrent.ListenableFuture;

import java.util.concurrent.ExecutionException;
import java.util.concurrent.Executor;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class Application {

    public static void main(String[] args) throws Exception {
        final ExecutorService executor = Executors.newFixedThreadPool(4);

        Function<Integer, Integer> add2 = x -> x + 2;
        Function<Integer, Integer> mul8 = x -> x * 8;
        Function<Integer, Integer> fail = x -> {
            throw new RuntimeException("Dam broke and number overflowed!");
        };
        
        // composing computations

        printResult(
                MonadicListenableFuture.hoist(2, executor)
                        .compose(add2).compose(mul8)
        );
        printResult(
                MonadicListenableFuture.hoist(2, executor)
                        .compose(add2).compose(mul8).compose(add2).compose(mul8)
        );

        // simulating failure

        printResult(
                MonadicListenableFuture.hoist(2, executor)
                        .compose(add2).compose(mul8).compose(fail).compose(add2).compose(mul8)
        );

        executor.shutdown();
    }

    static <V> void printResult(MonadicListenableFuture<V> m) {
        try {
            V v = m.getFuture().get();
            System.out.println(v);
        } catch (ExecutionException e) {
            System.out.println("Error executing chain: " + e.getCause());
        } catch (InterruptedException ie) {
            System.out.println("Execution interrupted: " + ie);
        }
    }

    public static class MonadicListenableFuture<V> {
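        // Thin wrapper pairing a ListenableFuture with the Executor on which each
        // composed transformation will run.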
        final ListenableFuture<V> future;
        final Executor exec;

        private MonadicListenableFuture(ListenableFuture<V> f, Executor e) {
            future = f;
            exec = e;
        }

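        // Lift a plain value into the future context (the return / immediateFuture analogue).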
        public static <V> MonadicListenableFuture<V> hoist(V v, Executor e) {
            return new MonadicListenableFuture<>(Futures.immediateFuture(v), e);
        }

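        // Compose the next (synchronous) step by applying func to this future's result via Futures.transform.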
        public <O> MonadicListenableFuture<O> compose(Function<V, ? extends O> func) {
            return new MonadicListenableFuture<>(Futures.transform(future, func, exec), exec);
        }

        public ListenableFuture<V> getFuture() {
            return future;
        }
    }
}

Running this program produces the following output:

32
272
Error executing chain: java.lang.RuntimeException: Dam broke and number overflowed!

The Java source is hosted at MonadicGuava on GitHub.

Happy composing!