# ICFP Contest 2008 — The One Liners

This year I participated in the International Conference on Functional Programming (ICFP) contest with a friend, Sooraj Bhat. Our team was The One Liners. As it is the fashion, and we found the exercise rewarding, here is our official ICFP 2008 post mortem.

We used git to manage our source (tarball).

## Introduction

The task was to write software to control a Mars rover and drive it to home base, while avoiding three types of obstacle: boulders, craters, and martians. The rover would ricochet off boulders, fall into craters, and be captured by martians. Thus you really want to avoid all obstacles if you can, although boulders are the least penalising. Our basic strategy was to accelerate toward home base unless there was an obstacle along that path. If there was, we’d pick a point to the left or right of it and go there, preferring the direction we were already facing.
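The shape of that subgoal choice can be sketched in a few lines of Haskell. All names here, and the safety margin, are made up for illustration; our real contest code was more involved:

```haskell
-- A toy sketch of the subgoal choice described above. Obstacles are
-- (centre, radius) pairs; everything here is illustrative only.
type Vec = (Double, Double)

add, sub :: Vec -> Vec -> Vec
add (x1, y1) (x2, y2) = (x1 + x2, y1 + y2)
sub (x1, y1) (x2, y2) = (x1 - x2, y1 - y2)

scale :: Double -> Vec -> Vec
scale k (x, y) = (k * x, k * y)

perp :: Vec -> Vec                 -- rotate 90 degrees counter-clockwise
perp (x, y) = (negate y, x)

norm :: Vec -> Double
norm (x, y) = sqrt (x * x + y * y)

-- Head straight for home unless an obstacle is in the way; otherwise aim
-- beside it, offset by its radius plus a margin. The sign of `side`
-- (+1 left, -1 right) encodes the direction we are already facing.
chooseSubgoal :: Vec -> Maybe (Vec, Double) -> Double -> Vec
chooseSubgoal home Nothing _ = home
chooseSubgoal home (Just (centre, radius)) side =
  let margin = 2.0                               -- made-up safety margin
      toHome = sub home centre
      unit   = scale (1 / norm toHome) toHome    -- unit vector toward home
      offset = scale (side * (radius + margin)) (perp unit)
  in add centre offset
```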

We chose Haskell because we’re both functional programming nuts. It worked out well. A priori, I thought speed would be an issue, but we never used more than 2% of our processor (both of us had dual core machines), whereas the server was close to 100% constantly. Our model of the world is basically a list of all known, static objects (boulders and craters). The collision avoidance code scans this list for the closest object to avoid. I know that some other teams had cleverer obstacle representations (like quadtrees) to find the nearest obstacle more quickly. But it appears not to have been necessary, at least on the maps provided by the contest organisers.

When the contest started, we got right to work. Sooraj went off to get lunch for three hours, and I prepared a script that would make a tarball of the repo from HEAD, and test each of the contest requirements (the README, bin/install, etc. are present and non-empty). I then wrote bin/install, which attempted to build our source using only the (paltry number of) packages available on the LiveCD.

Since one of the requirements was that bin/install should not attempt to write outside of the icfp08 directory, I planned to use the --package-db and --prefix Cabal flags to make it compile and install all the libraries to a place under icfp08. This was a great idea, except that every release of Cabal has bugs with these flags. (I did not realise this until several hours into trying this out.) Duncan Coutts (dcoutts on #haskell) was kind enough to apply a patch from the bleeding edge Cabal (1.5 branch) to the 1.4 branch for me, fixing the issue (I found out later it was only part of the issue, unfortunately). After a while, I stopped, confident I could probably figure it out later, and decided I had better actually think about the problem.

Around 1330 EDT on Monday 14 July, we started preparing our submission. I again went to work trying to convince Cabal to install to and read from the right places. Every package built fine except network, which has some C glue code. For some reason I never figured out, Cabal wasn’t passing the -package-conf flag to GHC in this case.

In any case, 40 minutes before contest end, when I was feverishly trying to figure out why this wasn’t working, a message was posted saying that the network package would be available. At that point I could have deleted 4 lines from bin/install and submitted, but I never saw the message. (Sooraj didn’t see it either. He was making cookies or something. Oh yeah, he wasn’t subscribed to the mailing list.) So, with great anguish, we didn’t actually submit anything.

In any case, we had a lot of fun, and here are our thoughts.

## Thoughts

#### What Worked

• Since Sooraj is in Atlanta and I’m in New York, we used webcams and Skype for communication, and we were rarely not in communication. We recommend microphones you don’t have to wear.
• For tricky code, working together on the same code, rather than on independent subtasks, helped noticeably. It’s tempting to concentrate on the parallelism afforded by a team, but ours benefitted especially from synchronous activity.

For example, beeline was the name of our tactic which, given a desired end position, generated rover commands to control the steering and acceleration to send us roughly in that direction. I was working on this alone while Sooraj was working on other things, and both of us came up with mediocre half-solutions. When we started working together things started clicking, and we produced shorter, simpler, and correct code.

However, for straightforward stuff, parallelism worked well. For example, we needed code that would establish a TCP/IP connection and manage the incoming stream of messages, as well as send outgoing control messages. Also, we needed a parser that would take a message and turn it into a convenient Haskell data type. I worked on the former while Sooraj worked on the latter, and after an hour or two when we had finished, each had at most one bug.

• git-gui: Sooraj, aka, the four-hour-dinner man, had never used git. git-gui, a graphical front-end for interacting with a git repository, made using git relatively painless. Personally, I’ve always found GUIs for version control to be inhibiting, but git-gui gives me exactly what I want, 95% of the time. I used it, too.

• More testing tools: we later found out that other teams had come up with some neat tools for debugging and testing their rovers. One team even drew a map the same size as the rover’s, and on it had the rover’s heading vector and subgoal position marked. We should have made a similar effort in making the modify-test-feedback loop tighter. If we had, we would have found bugs earlier.
• We think this particular task would benefit from a higher-level control interface. We should have come up with several types of explicit goals reflecting a high-level structure, which some series of translations turns into tactics over time.

One way in which we did this right is by explicitly choosing a goal location (either the home base or to the side of the nearest obstacle) for the rover, and generating commands to meet that goal. This is done by what we called the sidestep planner. However, we do not do this for the acceleration of the rover — we mostly just accelerate as much as possible. We should have come up with a way to describe acceleration goals, and planned using those.

• Use a console logger that can separate output from different programmers. This way each developer can insert his own debugging messages without creating a bunch of noise for the others. It looks like hslogger would fit the bill nicely. Early on in development, we used the Haskell equivalent of printf(), because that was really all we needed. After our parsing and basic tactic infrastructure was in place, we really didn’t need a logger anymore, so we got rid of the output.
• Have a quantitative, automatic way to assess progress over time, e.g. performance/score graphs. We didn’t spend any time coming up with a way to store our results so that we could compare to them later. This will save you time in the end, because you can quickly reject approaches that aren’t working.

For posterity.

=== ICFP 2008 Contest Submission README ===

Team: The One Liners
Compiler: GHC 6.8.2

== Third Party Libraries ==

bytestring-0.9.1.0  --  Fast strings
Cabal-1.4.0.2       --  Build/package manager
network-2.1.0.0     --  Network facilities
parsec-3.0.0        --  Parser combinators

== Overview of the modules / our strategy ==

Main --

Controller -- This contains our main logic.  The basic structure
consists of a low-level routine that is responsible for
navigating to a specified location, and a high-level routine that
is responsible for choosing subgoal locations to go to.

Controller.Util -- Generically useful support routines for our
controller strategies.

Controller.World -- Any state that the controller wishes to
persist between runs.

Protocol -- Basic data structures that hold information from the
messages.

Protocol.Wire -- Routines to read/write the messages from/to
the network.

Data.Vector -- Vectors in R^2


* — Warning: all denigrating comments toward Sooraj are more sarcastic than they appear.

# A Modern SAT Solver in Haskell

Update 2009-03-09: The git clone url has been corrected. Please post a comment if it doesn’t work.

For my AI class project this semester I chose to write a modern SAT solver in Haskell. The SAT problem poses the question: is there a way to make a logical statement true by assigning its propositions to true (1) or false (0)? A SAT solver answers the question.

For example, $P \wedge Q$ and $((P \rightarrow Q) \rightarrow P) \rightarrow P$ are both logical statements using variables $P$ and $Q$ (note that $\wedge$ should be pronounced “and”, while $\rightarrow$ is “only if”). The first can be made true, or satisfied, by assigning $P = 1$ and $Q = 1$, and the second is actually true no matter the values of the variables (logically, this means it is valid).

In reality SAT problems are represented in clausal form, in which a formula becomes a set of clauses, where each clause contains a choice of literals. A literal is a variable or its negation. In order to satisfy a clause, at least one of its literals must be true; and in order to satisfy a set of clauses, each clause must be satisfied.
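Clausal form is compact to encode. Here is a minimal sketch in Haskell, using DIMACS-style integer literals; these names are purely illustrative, not the representation any particular solver uses:

```haskell
-- A minimal encoding of clausal form: a literal is a nonzero Int, with
-- negative meaning negated (DIMACS style). Illustrative only.
type Lit     = Int
type Clause  = [Lit]
type Formula = [Clause]

-- An assignment maps variables to truth values; an association list
-- keeps the sketch simple.
type Assignment = [(Int, Bool)]

-- A literal is true when its variable has the matching polarity.
litTrue :: Assignment -> Lit -> Bool
litTrue a l = lookup (abs l) a == Just (l > 0)

-- A clause is satisfied when at least one literal is true; a formula
-- when every clause is.
satisfies :: Assignment -> Formula -> Bool
satisfies a = all (any (litTrue a))
```

For example, $P \wedge Q$ with $P$ as variable 1 and $Q$ as variable 2 is the formula `[[1], [2]]`.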

Anyway, the point is that many real-world problems can be described using this sort of logical language such that a satisfying variable assignment can be mapped into a solution to the problem encoded in the clauses. Thus solving these constraint satisfaction problems can be automated, provided a computer can discover a satisfying assignment efficiently. The only problem is the computer cannot, for every such problem, discover a solution efficiently. This phenomenon, known as NP-completeness, essentially groups certain problems as having similar inherent computational difficulty. It means all the problems you care about are prohibitively expensive to solve, in general. As far as we know.

Preamble: For anybody who’d like to look at the code, it’s available via git by running:

$ git clone git://github.com/dbueno/funsat.git

There have been multiple obstacles to this project. I’m aware of only one published paper detailing the internals of a modern DPLL SAT solver: Minisat. The paper was quite helpful, for the most part, especially with how to factor data structures. The other papers I read (Zhang et al. on zchaff, cache performance, and clause learning; Lynce et al. on data structures) gave me a good grasp of the engineering trade-offs, but little insight into how to translate that to an efficient implementation.

A DPLL-style SAT solver is composed of three essential steps: inference, decisions, and conflict analyses. The basic procedure loops the following until no more decisions can be made:

• infer as much as possible
• if you have deduced conflicting assignments:
  • if you have made at least one decision, remove all inferences made after the last decision, record a reason for the conflict, and return to the top of the loop
  • otherwise the problem is unsatisfiable
• if there is no conflict, pick a variable and decide its value (0 or 1)
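The loop above can be sketched as the classical recursive DPLL, where plain backtracking plays the role of undoing the inferences made after the last decision. This sketch has no clause learning, and the names are illustrative:

```haskell
-- Classical recursive DPLL over integer literals (negation is negate).
-- Illustrative sketch; real solvers are iterative and learn clauses.
type Lit = Int

dpll :: [[Lit]] -> Maybe [Lit]
dpll clauses
  | [] `elem` clauses = Nothing             -- empty clause: conflict
  | null clauses      = Just []             -- every clause satisfied
  | otherwise =
      case [l | [l] <- clauses] of
        (u:_) -> fmap (u :) (dpll (assign u clauses))   -- unit propagation
        []    ->                                        -- decision
          let d = head (head clauses)
          in case dpll (assign d clauses) of
               Just ls -> Just (d : ls)
               Nothing -> fmap (negate d :) (dpll (assign (negate d) clauses))

-- Assigning a literal true deletes the clauses it satisfies and removes
-- its (now false) negation from the rest.
assign :: Lit -> [[Lit]] -> [[Lit]]
assign l = map (filter (/= negate l)) . filter (l `notElem`)
```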

## Inference in DPLL

Many modern solvers use a single inference rule, called unit propagation (UP). The rule is: whenever a clause has all false literals but one, that one literal must be true in order to satisfy the clause. As an example of this, suppose we have set $P = 1, Q = 1$ and we have the constraint $\neg P \vee \neg Q \vee R$. Then it must be that $R = 1$ since that constraint must be satisfied, and the other two literals are false.

The clause is called a unit clause since the variable $R$ is forced to assume a certain value based on the interactions of variables in the rest of the constraint.
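As a one-shot scan over the clauses, the rule looks like this. This is an illustrative sketch only, not the watched-literal machinery a real solver uses:

```haskell
-- Naive unit detection: find a clause that is not yet satisfied and has
-- exactly one unassigned literal; that literal is forced. Illustrative.
type Lit        = Int
type Assignment = [(Int, Bool)]

-- The value of a literal under a partial assignment, if any.
value :: Assignment -> Lit -> Maybe Bool
value a l = fmap (\b -> if l > 0 then b else not b) (lookup (abs l) a)

findUnit :: Assignment -> [[Lit]] -> Maybe Lit
findUnit a clauses =
  case [ l | c <- clauses
           , all (\x -> value a x /= Just True) c    -- clause not satisfied
           , [l] <- [[x | x <- c, value a x == Nothing]] ] of
    (l:_) -> Just l
    []    -> Nothing
```

Running it on the example from the text, with $P = 1$, $Q = 1$ and the clause $\neg P \vee \neg Q \vee R$ encoded as `[-1, -2, 3]`, it finds the forced literal for $R$.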

The obvious way to implement UP is, after every assignment, to scan the problem clauses for unit clauses. This is prohibitively expensive when the number of clauses is large (as it is in the problems you care about), so, most modern SAT solvers do efficient unit propagation using the watched literals scheme. As the name suggests, we keep track of any two literals of each clause, and maintain a mapping from each watched literal to the clauses in which it occurs. When a literal becomes true, we visit each clause C in which its negation occurs: C now has one more false literal and might be unit; if so, by construction the other watched literal for C must be the unit literal, and so it is propagated. If C is not unit then we choose a new watched literal to replace the just-assigned one.

I implemented this in pretty much the same way it is implemented by minisat: an STUArray maps literals to watcher lists, that is, lists of clauses. Each time an assignment is made we visit the watcher lists, which costs us an array lookup and the length of the watcher list.

## Decisions in DPLL

An easy decision strategy is simply to pick the first unassigned variable in the problem. We can do better without much work, though, by keeping track of the “activity” of each variable. (The activity is just a number, and bigger is better.) Whenever a variable occurs in the analysis of a conflict (see next section), we bump the activity of that variable by a constant. When it’s time to choose the next variable to decide, we choose the one with highest activity.

This naturally leads to needing a priority queue, which excited me because of my unnatural obsession with Fibonacci heaps. It seems like the perfect use case: the number of Extract-Min operations is (probably) low relative to the number of variable-bumping (Decrease-Key) operations needed. Unfortunately, as I will detail in a future post, there are serious problems implementing Fibonacci heaps in Haskell. I thought Finger Trees would help — and there was already a Haskell implementation — but they turned out to be expensive (again, I refer you for details to a post I haven’t yet written).

At the moment I simply keep an array mapping variables to their activities to achieve constant-time bumping, and when I need to choose a variable to decide, I simply pick the maximum-activity variable from a list of unassigned variables (linear time). For comparison, a Fibonacci heap can give you the maximum element in constant (amortised) time, so there is room for improvement here. The potential improvement, however, is tempered by the fact that deciding which variable to assign next is not even close to the main runtime bottleneck — that honorable estate is reserved for unit propagation.
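In sketch form, that decision step looks like this, with association lists standing in for the real STUArray purely for illustration:

```haskell
import Data.List (maximumBy)
import Data.Ord (comparing)

-- Activity-based decision sketch (illustrative names): pick the
-- unassigned variable with the highest activity, in linear time.
type Var = Int

decide :: [(Var, Double)] -> [Var] -> Maybe Var
decide _    []         = Nothing
decide acts unassigned = Just (maximumBy (comparing activity) unassigned)
  where activity v = maybe 0 id (lookup v acts)

-- Bumping adds a constant to a variable's activity. This is constant
-- time in the array-backed version; linear here, shown only for shape.
bump :: Double -> Var -> [(Var, Double)] -> [(Var, Double)]
bump k v acts =
  (v, k + maybe 0 id (lookup v acts)) : filter ((/= v) . fst) acts
```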

## Conflict Analysis in DPLL

A conflict occurs when unit propagation infers that a literal which is known to be true must be false. Logically, it’s a contradiction. Usually (if your problem is satisfiable) it means that you picked the wrong variable in a decision (i.e. assigned it 1 when it should have been 0, or vice versa).

The original DPLL procedure did the obvious thing when a conflict occurred: it reversed the most recent decision. If the last thing done was to assign $P$ to 1, it simply undid any consequences of assigning $P$ to 1, and set it to 0. Recently, as indicated by the papers in the introduction, there has been a slew of different techniques for clause learning — analysing exactly how the SAT solver produced the current conflict, then producing a new clause that is consistent with the problem and that prevents the conflict from ever occurring again. This new clause is called a learned clause.

This part of the solver is one of the hardest to get right, possibly because it is the most poorly-explained in the papers I’ve read. The Minisat paper gives actual code for producing the so-called First UIP clause (UIP = “unique implication point”) of the conflict, but that code contains nested loops, a counter variable named “counter”, and no invariants, making it all but impenetrable. The best explanation of the theory is in this survey, but it has no code.

Since in general there are many learned clauses one can produce by analysing a conflict, one has to choose. The learned clause is produced by looking at the implication graph of the problem; a cut of that graph determines the learned clause. The First UIP clause has empirically been shown to be the best among common competitors, so I chose that one. That choice has proven painful.

There is a very clever algorithm for calculating the First UIP clause from a conflict graph using breadth-first search which takes linear time. Unfortunately, there is no rigorous description of it anywhere. My current implementation actually explicitly constructs the conflict graph and computes the First UIP cut by definition (which definition involves computing graph dominators). So, at least it’s correct. As soon as I can describe the First UIP linear-time algorithm precisely and implement it, I’ll post it for reference.

## Current Thoughts

At the moment my solver isn’t too slow, but I’m not happy with the code. It feels like a C program written in Haskell (as it should, since much of it is adapted from Minisat). My main monad is StateT on top of ST (because I use STUArrays extensively).

There are glimmers of sanity: the explicit conflict-graph construction makes the implementation look like a specification; and the Functional Graph Library included with GHC has support to output any graph in Graphviz format, which makes debugging relatively sane.

Perhaps using completely different data structures I would be able to come up with something as efficient but (1) shorter and (2) easier to test. I welcome any suggestions (“yeah your code pretty much sucks”) or pointers (“why didn’t you read this paper?”). The git repo contains a bunch of benchmark problems with which one can test the solver, if you’re interested.

# Functional Pearl: Trees

[This post is Literate Haskell. If you copy the entire post into a file with a .lhs extension, you can load up that file in your favorite Haskell Interpreter, and it should work. Then you can test the functions mentioned herein.]

Given a binary tree (you know, a thing that’s either a leaf with some data or two branches, both of which are binary trees), can you compute a new binary tree with exactly the same structure, but with every leaf’s value replaced by the minimum value in the tree?

If you’re a programmer, of course you can. One simple pass to find the minimum value in the tree, and another to replace each leaf with the minimum value. Here is the data type and an implementation of this idea in Haskell.


> data Tree a = Leaf a          -- The leaf holds data of any type.
>             | Branch (Tree a) (Tree a) deriving (Show)

> replaceMin2 t = let m = findMin t
>                 in propagate t m
>   where
>     findMin (Leaf m) = m
>     findMin (Branch left right) = findMin left `min` findMin right

>     propagate (Leaf _) m = Leaf m
>     propagate (Branch left right) m = Branch (propagate left m)
>                                              (propagate right m)



Simple, and clear. Now, can you think of a way to do it in one pass? You might consider making one pass in which (1) for each leaf, you replace its value with a shared mutable cell, returning the leaf’s original value, and (2) for each branch, you recursively compute the min of the left and right results. At the end, you set the cell you gave to all the leaves to the min computed in this single pass, so that each leaf gets the value at once.

This is a good idea. However, this will change the type of the new tree. Instead of having a tree of integers, you’ll have a tree of cells-pointing-to-integers, and you’ll need another pass to get rid of them.

Here is a solution. One pass. Ponder it.


> replaceMin :: Tree Int -> Tree Int
> replaceMin t = let (t', m) = rpMin (t, m) in t'

> rpMin :: (Tree Int, Int) -> (Tree Int, Int)
> rpMin (Leaf a, m) = (Leaf m, a)
> rpMin (Branch l r, m) = let (l', ml) = rpMin (l, m)
>                             (r', mr) = rpMin (r, m)
>                         in (Branch l' r', ml `min` mr)



Note how in replaceMin the variable m is both bound to part of the result of calling rpMin and passed as an argument to rpMin. Totally insane.

### That Doesn’t Work

I know; I said the same thing. The contract of rpMin is that it consumes a tuple of the tree and the actual minimum value of the tree (!), and returns (1) the tree with the minimum value stuck in the right places, and (2) the actual minimum value. If you’re used to imperative languages, or even functional languages without having used lazy evaluation, this contract is utter and complete nonsense. In fact, it is strong grounds for doubting a programmer’s competence.

The reason it is nonsense is that call-by-value parameter passing is most programmers’ default mindset. When a function is applied to its arguments, those arguments are computed down to some value of the appropriate type, and only then is the function invoked. This is the function application semantics of many programming languages, such as Java, C, Perl, etc.

The function replaceMin depends upon a strange and wonderful semantics called call-by-need semantics. In call-by-need semantics, when a function is invoked, the arguments are not evaluated. Only when, and if, the body of the function demands one of the arguments is it actually turned into a value. This fact makes replaceMin possible. Since nothing in rpMin actually tests the value of m (e.g. by comparing it to 0 or something), but simply uses it, we are allowed to refer to a value which cannot even be computed until rpMin is finished.
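A smaller instance of the same knot-tying makes the point in one line: x is defined in terms of y, and both come out of the same tuple. Under call-by-need this is fine, because y has a value by the time x demands it:

```haskell
-- x refers to y, which is bound by the very same pattern. Call-by-need
-- delays evaluating y + 1 until x is demanded, by which time y is 5.
knot :: Int
knot = let (x, y) = (y + 1, 5) in x   -- evaluates to 6
```

Under call-by-value, the right-hand side would have to be evaluated before the binding exists, and this definition would be rejected or loop.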

This means the algorithm given above essentially implements the cells idea suggested before, with at least two advantages: (1) it’s pure, meaning there is no way to step on our toes by assigning the wrong thing at the wrong time; (2) it really requires only one traversal, and doesn’t change the type of our tree. It computes a new Tree of ints, just like the old one.

### Acknowledgements

The function replaceMin and helper comes from http://www.cse.ogi.edu/pacsoft/projects/rmb/, in particular under “Other Artifacts”, linked to by the bullet point, “Slides for a talk on the recursive do-notation.” I have omitted a link to the PDF on purpose, so you’ll see all the cool stuff those people have done.

Update 18 Feb 2008. As one person mentioned in the comments, this is quite obviously call-by-need, not call-by-name, semantics. I have changed it in the post to avoid further confusion.