# Project Euler in F#: Problem 8

I’ve been trying to teach myself F# using the Project Euler problems, and I’m starting to feel I’m getting somewhere with the language. The few Euler problems I’ve solved so far have had very straightforward and natural solutions.

Problem 8 is as follows: Find the subsequence of 5 consecutive digits that yields the greatest product when multiplied together, in the 1000-digit number:

73167176531330624919225119674426574742355349194934
96983520312774506326239578318016984801869478851843
85861560789112949495459501737958331952853208805511
12540698747158523863050715693290963295227443043557
66896648950445244523161731856403098711121722383113
62229893423380308135336276614282806444486645238749
30358907296290491560440772390713810515859307960866
70172427121883998797908792274921901699720888093776
65727333001053367881220235421809751254540594752243
52584907711670556013604839586446706324415722155397
53697817977846174064955149290862569321978468622482
83972241375657056057490261407972968652414535100474
82166370484403199890008895243450658541227588666881
16427171479924442928230863465674813919123162824586
17866458359124566529476545682848912883142607690042
24219022671055626321111109370544217506941658960408
07198403850962455444362981230987879927244284909188
84580156166097919133875499200524063689912560717606
05886116467109405077541002256983155200055935729725
71636269561882670428252483600823257530420752963450

I was able to come up with an F# solution that is one line, plus a helper line to convert the string into a sequence of digits:

```
open System

let str = "731<...>"

let digits = Seq.map (fun x -> int (Char.GetNumericValue x)) str

let maxproduct num list =
    Seq.max (Seq.map (fun x -> Seq.reduce (*) x) (Seq.windowed num list))
```

The value digits is just the sequence of digits in the string, converted into integers. The function maxproduct works the obvious way: take every subsequence of five digits (Seq.windowed), multiply each one together (Seq.reduce, applied to each element of the sequence with Seq.map) and then find the maximum (Seq.max).

The only reason this needs quite so little work is the existence of Seq.windowed in the standard library, which does exactly the right thing in turning the 1000-element sequence into 996 five-element arrays of consecutive digits.

I’m not sure I like ramming all the functions into one line, and I’m sure there must be a way to combine map and reduce without the lambda, which adds a lot of clutter. If this was real code, it would need quite a lot of work to make it readable. However, the standard library is a big win, because the process of ‘windowing’ a sequence is nicely separated from the code. It’s also nice (for toy problems like this, at any rate) that the program is pretty much a definition of the problem, with little thought being necessary as to how to do the processing.
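For what it's worth, the lambda clutter can be removed by partial application: `Seq.reduce (*)` is already a function that multiplies a sequence together, so it can be handed straight to `Seq.map`. A pipelined restyling (same behaviour, just rearranged):

```fsharp
// Point-free variant: no lambda needed, since Seq.reduce (*) is itself
// a function from a sequence of ints to their product.
let maxproduct num list =
    list
    |> Seq.windowed num
    |> Seq.map (Seq.reduce (*))
    |> Seq.max
```

The pipeline also reads in the order the data flows, which helps once the one-liner stops being a one-liner.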

# Implementing the Graham Scan for a convex hull in Clojure

I saw an implementation of the Graham scan mentioned in Real World Haskell as an exercise. I figured I'd have a go at doing it in Clojure, as it might prove useful for another project I was doing in a combination of Java and Clojure.

I’m mentioning this here as it really illustrated to me the power of Clojure basing itself on the JVM. During the development of this inherently graphical algorithm, I was able to quickly cobble together a Swing UI that showed what was going on. Working with the UI was very nearly as responsive as working at a REPL (indeed, UI elements were being drawn and redrawn as a result of commands from the REPL) and illustrated what was going on with the algorithm far better. For the first time ever I didn’t feel the dichotomy between dynamic and graphical ways of working with code.

I was also able to follow a very test-driven approach with test-is. This works so well at the REPL that it starts to make my work with C++ seem laughable.

The main body of the algorithm itself was something that seemed to fit well into Clojure’s loop construct as it’s inherently iterative rather than recursive. This was the first time I’ve done something where the loop seemed more natural than both the recursive solution and a for / while loop in C++. I tend to like declarations of intent in code, and listing the variables that are going to vary at the top of the loop seems like a sensible piece of discipline.

For what it’s worth, here’s the code of the main algorithm:

```
(defn make-convex-hull
  "Make a convex hull for a set of points, all of which are assumed to have
  positive x and y coordinates. Returns a vector of lines, each of which is
  a sequence of two points."
  [points]
  (let [starting-point (find-starting-point points)
        remaining-points (remove #(= % starting-point) points)
        working-set (sort angle-comparator remaining-points)]
    (loop [remaining-points working-set
           p1 starting-point
           p2 (first working-set)
           p3 (second working-set)
           hull []]
      (if (= p3 (last working-set))
        ; We have reached the end of the set, return the full hull
        (if (left-turn? p1 p2 p3)
          (concat hull [[p1 p2] [p2 p3] [p3 starting-point]])
          (concat hull [[p1 p3] [p3 starting-point]]))

        ; Does this form a left turn?
        (if (left-turn? p1 p2 p3)
          (recur (rest remaining-points) p2 p3 (nth remaining-points 2) (conj hull [p1 p2]))
          (recur (rest remaining-points) p1 p3 (nth remaining-points 2) hull))))))
```
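The helpers `find-starting-point`, `angle-comparator` and `left-turn?` aren't shown here. The crucial one is `left-turn?`, which is the standard 2D cross-product sign test; a minimal sketch of how it could look (my reconstruction, not necessarily the code in the tarball):

```clojure
(defn left-turn?
  "True if travelling p1 -> p2 -> p3 turns to the left (anticlockwise),
  judged by the sign of the 2D cross product of the two segments."
  [[x1 y1] [x2 y2] [x3 y3]]
  (pos? (- (* (- x2 x1) (- y3 y1))
           (* (- y2 y1) (- x3 x1)))))
```

Note that this treats collinear points as not a left turn, which is the usual choice for a Graham scan since it drops redundant points from the hull.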

You can also download a complete tarball. Note that this is a tarball of a darcs repository, so if you’re someone who might want to employ me you can see a full history of how I worked on the project. And if you’re a potential employer who knows how to use darcs, I might be interested in hearing from you. 🙂

# A monad tutorial written by a guy who knows nothing about monads

Generally speaking there are way too many tutorials on monads on the web, though not many that are specific to Clojure (Konrad Hinsen’s tutorial being an obvious exception). The argument as I’ve seen it described is that too many people figure out monads, then immediately attempt to explain their understanding of monads without realising that people have to figure things out in their own way.

This is an attempt to do something slightly different: to expose my thinking as I am in the process of learning things for myself, and before I get too comfortable with the ideas and set in my ways. It is, in other words, a monad tutorial written by a guy who knows nothing about monads. I don’t really know whether this will be helpful to other people or not, but I figured it was worth a shot.

The observant among you will probably get the impression that the log below isn’t written as I think it, but after a period of reflection. Nevertheless, I attempt to detail my thought processes without too much retrospective alteration.

So, monads then. Brian Beckman wants me to think of them like function composition with extra type-decoration in some way. Konrad Hinsen makes them look a lot more like a sort of specialised let-binding, although as any avid lisper knows, let-bindings are little more than syntactic sugar for anonymous functions.

What strikes me immediately is how different monads look in Haskell from how they look in Clojure. This is aggravated by my knowing almost nothing of Haskell. Monad programs in Haskell are littered with functions, commonly from inscrutable types to pairs of inscrutable types. In Clojure, they look an awful lot like let bindings, where most of the work seems to be carried out in the binding with a trivial body.

Monads are available in Clojure contrib, so getting them hooked up is easy:

```
(ns uk.co.asymptotic.mucking.about.with.monads
  (:use clojure.contrib.monads))
```

The most obvious thing to try is the identity monad, which is hopefully the simplest case to understand. This acts exactly like a let binding:

```
(domonad identity-m
  [a 2
   b 21]
  (* a b))
```

returns 42. This is exactly the same result as

```
(let [a 2
      b 21]
  (* a b))
```

So far, so good. We could obviously attach some sort of transformation functions to the bindings, but this can hardly be what all the fuss is about. Monads have to be more than a simple macro transformation to a let binding.
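To see what room there is beyond a let binding, it helps to hand-expand the domonad form. For the identity monad, m-bind is plain function application and m-result is the identity, so the expression above amounts to (roughly, ignoring the real macro's details):

```clojure
;; Hand-expansion of (domonad identity-m [a 2 b 21] (* a b)).
;; For the identity monad, m-bind just applies the function to the value,
;; so the whole thing collapses into nested anonymous-function calls.
(let [m-bind   (fn [v f] (f v))
      m-result identity]
  (m-bind 2  (fn [a]
  (m-bind 21 (fn [b]
  (m-result (* a b)))))))
;; => 42
```

Written this way, the "extra room" is obvious: a different monad gets to supply a different m-bind, and so gets to decide how (and whether) each continuation is invoked.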

One of Konrad Hinsen’s examples looks interesting: a monad expression can be used to mimic the effect of a for expression as well as a let expression. This isn’t too far removed from a let expression, but something else is going on: the inner expression is being executed multiple times, and the eventual return value wraps these intermediate results into a list.

```
(domonad sequence-m
  [a (range 1 5)
   b (range 1 5)]
  (* a b))

=> (1 2 3 4 2 4 6 8 3 6 9 12 4 8 12 16)
```

So changing the monad in syntactically almost identical code has caused the inner expression to get evaluated multiple times. This hints at why monads are most naturally of use in functional languages, since we don’t care how many times something was executed, only about its result (since functions are pure).
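The same hand-expansion shows where the multiple evaluation comes from: for the sequence monad, m-bind is mapcat and m-result wraps its argument in a list, so each continuation is called once per element (again a sketch, not the library's actual expansion):

```clojure
;; Hand-expansion of (domonad sequence-m [a (range 1 5) b (range 1 5)] (* a b)).
;; m-bind maps the function over every element and concatenates the results,
;; which is exactly the nested-loop behaviour of a for expression.
(let [m-bind   (fn [xs f] (mapcat f xs))
      m-result list]
  (m-bind (range 1 5) (fn [a]
  (m-bind (range 1 5) (fn [b]
  (m-result (* a b)))))))
;; => (1 2 3 4 2 4 6 8 3 6 9 12 4 8 12 16)
```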

It also occurs to me that changing the monad has kept the overall form the same, but changed the values I can work with in the “let-binding” part of the monadic expression. It looks like all domonad expressions are going to look similar, but I can’t freely swap monads in and out without paying attention to the types of the values in the binding expression. It’s not surprising that we don’t have complete latitude here, but for the monad technique to be anything but an obfuscation we must have some independence between the expression and the monad we apply to it.

It occurs to me that this last detail may be something that works out better in Haskell, with its more visible type system. There must be some relationship between the type of the monad and the types of the expressions I can write under that monad, but in a dynamically-typed language it seems far too much like trial and error. But then, I’ve always felt that about typing in dynamically-typed languages.

So we can cause functions to be evaluated multiple times with monads. We can also cause code not to be evaluated, via the frequently-described maybe monad. The maybe monad seems to go further, and allow us to interfere with the execution of functions that are within the binding part of the expression:

```
(domonad maybe-m
  [a nil
   b (/ 1 0)]
  (+ a b))
```

This returns nil, indicating that we never got as far as the division by zero. This begins to sound like quite a powerful idea: we can control which code is evaluated from outside of the code itself, simply by changing the monad. But how general is this idea? So far we’ve only succeeded in using it on code written in a rather unnatural let-binding style, and I’ve had to come up with a new case to use each monad. If code can’t be reused, then I may as well hard-code the logic and get on with my life.
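Once more the hand-expansion makes the mechanism plain: the maybe monad's m-bind only calls the continuation when the value is non-nil, so the binding for b, division by zero and all, sits inside a function that is never invoked (a sketch, under the same caveats as before):

```clojure
;; Hand-expansion of (domonad maybe-m [a nil b (/ 1 0)] (+ a b)).
;; m-bind short-circuits on nil: the first bind returns nil without calling
;; its continuation, so the (/ 1 0) inside that continuation is never evaluated.
(let [m-bind   (fn [v f] (when-not (nil? v) (f v)))
      m-result identity]
  (m-bind nil (fn [a]
  (m-bind (/ 1 0) (fn [b]
  (m-result (+ a b)))))))
;; => nil
```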

Monad enthusiasts (doesn’t that sound like someone you don’t want to get stuck next to on a train?) often wax lyrical about how monads can be used to model things as diverse as state (in an otherwise stateless program), continuations and I/O. This is an appealing prospect: these are all things that seem to need special rules and representations baked into the programming language, and it would be pleasing to see them all described within a single theoretical framework. However, there’s not much sign of that in the progress so far.

At this point I’m going to pick up on one of Brian Beckman’s examples, namely the state monad. This shows a hint of getting us towards continuations and other such goodies. In order to make any sense of Beckman’s examples I had to go away and spend a few days learning Haskell. I figured if I could rewrite his Haskell example in Clojure I had a shot at claiming I could understand what is going on.

So I start by defining a tree:

```
(defstruct node :left :right)

(def tree
  (struct node
    (struct node :a :b)
    (struct node (struct node :c :d) :e)))
```

This isn’t going to win any awards for extensible Clojure code, but it’ll probably do the trick. I’m going to be using the implementation of the state monad as provided in Konrad Hinsen’s monad library (part of Clojure contrib), which gives me a couple of utility functions:

```
(defn update-state [f]
  (fn [s] (list s (f s))))

(defn set-state [s]
  (update-state (fn [_] s)))

(defn fetch-state []
  (update-state identity))
```

The names and input types of these functions look like exactly what we want in order to do stateful programming. However, the return values are inscrutable. I request the state gets updated, and all that happens is I get a function passed back to me. What’s up with that?
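Poking at the returned functions at the REPL at least shows what they do with a state once they finally get one (the definitions are repeated here so the snippet stands alone):

```clojure
(defn update-state [f]
  (fn [s] (list s (f s))))

(defn fetch-state []
  (update-state identity))

((update-state inc) 5)   ;; => (5 6)  old state as the value, new state alongside
((fetch-state) 5)        ;; => (5 5)  reads the state without changing it
```

So each of these is not a state operation but a *description* of one: a function waiting to be handed a state, which then returns the value produced together with the state to carry forward.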

Never mind, I’ll push ahead and write it in the most obvious way possible, following the structure of Beckman’s example:

```
(defn label-tree [tree]
  (if (map? tree)
    (domonad state-m
      [labelled-left (label-tree (:left tree))
       labelled-right (label-tree (:right tree))]
      (struct node labelled-left labelled-right))
    (domonad state-m
      [val (fetch-state)
       _ (update-state #(+ 1 %))]
      [tree val])))
```

I’ve cribbed heavily from Beckman’s code, and it’s not very idiomatic as I’ve had to replace Haskell pattern matching with an if, and forced it into the domonad binding form. Furthermore, it’s not at all clear why the fetch-state and update-state calls work the way they do. Assuming the domonad form to be a simple let-binding with some embellishment, it makes no sense to have functions like fetch-state and friends that return functions.

I’ll leave understanding it until later. First to see whether it works:

```
(label-tree tree)
```

That’s not what I expected. This looks like some sort of function object. But what is it expecting as an argument? Clearly I’ve not understood this. This isn’t what happened when I used the identity or maybe monads. Then I got a result straight back from the domonad call, not a function. Further hints that not everything in monad land is regular and substitutable.

Back to Beckman’s state monad tutorial. He emphasises that an instance of the state monad is a function from a state to a state-contents pair. Composing state monads gives more state monads, with successively more complex functions between state and state-contents pair, but always with the same signature. This is a hint. No matter how many I compose and how I compose them, I’ll always have an input that is demanding a state. This makes sense: I defined a function to update the state from one node to the next, but I didn’t tell it where to start from. Putting the initial state in as a function parameter seems like a good idea:

```
((label-tree tree) 0)

=> ({:left {:left [:a 0], :right [:b 1]}, :right {:left {:left [:c 2], :right [:d 3]}, :right [:e 4]}} 5)
```

Victory is mine! The tree has been labelled, and the code did the right thing first time. Crucially, it seems in some sense more natural than the more obvious functional solution of passing the state around as an extra parameter in and a tuple result out of functions. The use of the monad looks like what you’d like to write in a stateful programming language, and the binding and passing happens transparently.

As a side note, the call to label-tree is obviously something that would look cleaner in Haskell than it does in Clojure, since in the former we can simply list the parameters as if we were calling a two-parameter function. In fact, the question of the exact difference between the monad call followed by binding the second parameter and a genuine two-parameter function is beyond my meagre understanding of Haskell. Obviously we could hide this syntactic issue by declaring label-tree as a 2-argument function that does the monad call and then applies the state, but it’s a slight let-down to find that this is necessary.

So it appears that the domonad call hasn’t actually done anything, but has produced a single heinously complicated function describing what to do when given a state. Crucially, the content of the tree itself has been built into the very structure of this function. The state travels through the function in a deterministic fashion and is permuted in ways that are, by the time the monad has been instantiated, completely pre-determined.
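That pre-determined threading is exactly what the state monad's bind has to arrange. A minimal, self-contained sketch (my names `state-bind` and `state-result` are hypothetical; in the contrib library they are the m-bind and m-result of state-m):

```clojure
(defn update-state [f] (fn [s] (list s (f s))))
(defn fetch-state  []  (update-state identity))

;; m-result for the state monad: produce the value, leave the state alone.
(defn state-result [v] (fn [s] (list v s)))

;; m-bind for the state monad: run mv on the incoming state, feed the
;; resulting value to f, and run the state function f returns on the
;; updated state. This is the invisible plumbing behind domonad.
(defn state-bind [mv f]
  (fn [s]
    (let [[v s'] (mv s)]
      ((f v) s'))))

;; A tiny stateful computation: read the counter, bump it, return the reading.
(def read-and-bump
  (state-bind (fetch-state) (fn [v]
  (state-bind (update-state inc) (fn [_]
  (state-result v))))))

(read-and-bump 0)   ;; => (0 1)
```

Nothing stateful happens until the composed function is finally applied to an initial state, which is why `(label-tree tree)` alone returned a function.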

I still have questions about monads. It still looks like a trick, albeit an elegant one. It’s not at all clear to me where using a monad will be preferable to simply writing idiomatic Clojure code. The domonad syntax still swears at me by looking entirely unlike ordinary Clojure code. Nevertheless, I feel I’m a step closer to understanding it all.