New versions of funsat and bitset

I wrote (and sometimes maintain) two Haskell programs: funsat and bitset. Funsat is a native Haskell CDCL (conflict-driven clause-learning) SAT solver. Bitset is a small library for representing sets of items using bits under the hood (as opposed to trees, which are common in functional programming).

I just released funsat 0.6.2 and bitset 1.1.

Funsat 0.6.2 has some compatibility changes so it now works with ghc 6.12. Also, it is cabal-installable. (Before, it would die because of stupid dependencies, or something.)

Bitset 1.1 now gives you access to the Integer that is used in the underlying bit representation.

Functional Priority Queues

In a previous post on my SAT solver I promised to post about a dead end using functional priority queues. I believe the most efficient data structure for the dynamic variable ordering should be a priority queue. In fact, it looks like a job for a Fibonacci heap (animation). According to CLRS, “Fibonacci heaps are especially desirable when the number of Extract-Min and Delete operations is small relative to the number of other operations performed.” If just a few decisions lead to a conflict, we may end up adjusting the priorities of many variables (to handle the conflict), so that there is more adjusting than making decisions.

The asymptotic complexity of the Fibonacci heap works out because of its mutable tree structure. Specifically, adjusting the key of a node in the heap requires the caller to provide a pointer to that node in the heap, and that node needs to be able to navigate anywhere else in the heap. This can be done in Haskell (either with the IO monad or the ST monad), but the result isn’t elegant by any means. It also feels wrong — I program in a functional language because it lets me do equational reasoning, and think about data dependencies rather than state machine models of computation.

Functional Fibonacci Heap?

I tried for a while to see if I could come up with a Zipper view of the heap instead of the traditional linked-lists-of-heap-ordered-trees approach. That is, maybe a zipper could approximate a pointer. What I’d really need is a zipper with arbitrarily-many points of access into the heap (instead of one), and I’ve no idea how to write that. But any update of the heap requires updating all these zippers, which seems to destroy the complexity of the Decrease-Key operation.

Another option is to implement Decrease-Key by searching for the node to update, instead of providing a pointer. But I couldn’t figure out a way to integrate searching within the structure of a Fibonacci heap.

Eventually I considered this a dead end and posted to comp.lang.functional, where Ben Franksen pointed me to finger trees.

Finger Trees

Finger trees are a functional data structure for persistent sequences. Ralf Hinze described a Haskell implementation of 2-3 finger trees which is available from Hackage, so I used this. In the paper Hinze even describes how to implement max-priority queues on top of finger trees, which seemed like a good idea at first. Unfortunately, I don’t think it admits an efficient Decrease-Key, so I used the paper’s description of ordered sequences instead. This seems like the right thing to do, given that the paper says:

Ordered sequences subsume priority queues, as we have immediate access to the smallest and the greatest element, and search trees, as we can partition ordered sequences in logarithmic time.

The ability to do search trees gives us an efficient (sub-linear) Decrease-Key. Finally, my priority queue operations are:

extractMax :: (Ord k) => Heap k a -> (Info k a, Heap k a)
extractMax (OrdSeq s) = (x, OrdSeq s')
  where x :< s' = viewl s

increaseKey :: (Ord k, Eq a) => Info k a -> k -> Heap k a -> Heap k a
increaseKey oldInfo newKey (OrdSeq t) = OrdSeq (l' >< eqs' >< r')
  where (l, r)      = split (<= measure oldInfo) t
        (eqs, r')   = split (< measure oldInfo) r
        eqs'        = foldr (\i t' -> if i == oldInfo then t' else i <| t')
                            FT.empty eqs
        -- newInfo is bigger, so must fit on bigger side of split
        (OrdSeq l') = insert newInfo (OrdSeq l)
        newInfo     = oldInfo{ key = newKey }



These implementations essentially do what I described in the previous section: implement a pointer approximation and integrate efficient searching. (Read the paper for more details.)

So What?

Well, I used finger trees in funsat. They were faster than whatever I was doing before.

Linear-time First UIP calculation

In a previous post I mentioned I was using a super-linear algorithm for calculating the first unique implication point (UIP) learned clause in funsat. The algorithm basically uses the definition of first UIP and requires calculating graph dominators of an explicitly-constructed conflict graph. By contrast, the linear-time algorithm described in the Minisat paper never explicitly constructs the graph, it merely inspects the trail in reverse, figuring out which literals should be inserted in the conflict clause using the reasons for each assignment. The former algorithm has the advantage that it implements what it means to calculate the first UIP clause; in other words, it’s easily seen as a correct implementation. The latter isn’t, but when it works, it’s faster and leaner.

The Minisat paper only gives lightly documented pseudocode for the algorithm. There are no data structure invariants nor proof of correctness. Here’s my attempt at explaining how and why it works.

The implication graph is described well in this handbook chapter. Basically, it is a directed graph in which the nodes are literals from the assignment, and an edge x → y indicates that the assignment x helped propagate the assignment y.

A UIP of an implication graph is a node at the current decision level d such that every path from the decision variable at level d to the conflict variable or its negation must go through it. Intuitively, it is a single reason at level d that causes the conflict. (This paragraph is from the same chapter.)

There may be many UIPs for the current decision level. The last decision variable is always a UIP. The first UIP is the one with the shortest path to the conflict node.

From the UIP definition it is clear why graph dominators are involved: every UIP is a dominator of the conflict variables with respect to the last decision variable. My first implementation explicitly calculated those dominators, and chose the one closest to the conflict nodes.

Once the desired UIP is found, we have to calculate the corresponding learned clause. It turns out that good learned clauses correspond to cuts of the implication graph during a conflict (this is often called the conflict graph). The learned clause corresponding to a cut (S,T) is the set of nodes that are cut edge sources. Formally, this is the set $\{x \in S ~|~ \exists y \in T. x \rightarrow y \}$. To tie the knot, one need only know that the UIP u determines the cut (S,T) where T = $\{x ~|~ \text{there is a path of length at least 1 from } u \text{ to } x\}$. This information is sufficient to calculate the learned clause corresponding to a UIP.

The trail is the current assignment arranged in reverse chronological unit-propagation order (last assigned first out). The reason for a literal q is the set $\{x ~|~ \text{the edge } x \rightarrow q \text{ is part of the implication graph} \}$.

Algorithm

The algorithm outputs a learned clause (sequence of literals). There are a few important variables and conventions for the following pseudocode:

• p — Invariant: literal from the current decision level, initially the propagated literal that caused the conflict. The top of the trail is not p.
• c — Invariant: number of unprocessed but seen variables from current decision level, initially 0.
• We can mark a variable as seen. All variables are initially unseen.
• Every literal included in the learned clause has sign opposite what it does under the current assignment. (In the case of the conflicting literal, we include its negation.)

Onto the pseudocode:

Process literals starting with p until we process all the literals we see at the current decision level.
do
    Process literal p:
        foreach literal q in the reason for p
            if var(q)¹ is unseen
                mark var(q) seen
                if q is from the current decision level
                    increment c
                else if q is from a lower decision level
                    add q to the learned clause

    Select the next interesting literal to follow:
        do
            assign p to head of trail
        while p is unseen
        decrement c
while c > 0

By now, p is the first UIP node of the current decision level. Add the negation of p to the learned clause and output it.
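To make the loop concrete, here is a sketch of it in Haskell. The representations are my own simplifying assumptions for illustration (a literal is an `Int` whose sign carries negation; levels and reasons live in `Map`s), not funsat's actual types; following Minisat, the analysis starts from the assigned literals whose edges enter the conflict node, i.e. the literals of the falsified clause.

```haskell
import qualified Data.Map as Map
import qualified Data.Set as Set

type Lit = Int   -- negative means negated
type Var = Int

var :: Lit -> Var
var = abs

-- | @firstUIPClause trail levelOf curLevel reasonOf conflict@ computes the
-- first-UIP learned clause.  @trail@ lists assigned literals, last assigned
-- first; @conflict@ holds the assigned literals whose edges enter the
-- conflict node.
firstUIPClause :: [Lit] -> Map.Map Var Int -> Int -> Map.Map Var [Lit]
               -> [Lit] -> [Lit]
firstUIPClause trail levelOf curLevel reasonOf conflict =
    loop conflict trail Set.empty 0 []
  where
    loop reason rest seen c learned =
        let (seen', c', learned') = foldl process (seen, c, learned) reason
            (p, rest')            = nextSeen rest seen'
            c''                   = c' - 1          -- p is now processed
        in if c'' == 0
             then negate p : learned'               -- p is the first UIP
             else loop (Map.findWithDefault [] (var p) reasonOf)
                       rest' seen' c'' learned'

    -- mark q's variable seen; count it if at the current level, otherwise
    -- it crosses the cut, so its negation joins the learned clause
    process (seen, c, learned) q
      | var q `Set.member` seen         = (seen, c, learned)
      | levelOf Map.! var q == curLevel = (seen', c + 1, learned)
      | otherwise                       = (seen', c, negate q : learned)
      where seen' = Set.insert (var q) seen

    -- walk down the trail to the next seen literal
    nextSeen (x:xs) seen
      | var x `Set.member` seen = (x, xs)
      | otherwise               = nextSeen xs seen
    nextSeen []     _           = error "trail exhausted before the UIP"
```

For a level-0 assignment x6 and a level-1 chain x1 propagating x2 and x3, those propagating x4, x4 propagating x5, with the clause ¬x4 ∨ ¬x6 ∨ ¬x5 falsified, the sketch yields the learned clause {¬x4, ¬x6}: x4 is the first UIP.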

¹ var(x) is the variable corresponding to the literal x.

Correctness

The algorithm performs a backwards breadth-first search for the first UIP node. The trail is the BFS queue. The counter allows us to deduce when p is the closest dominator of the conflict variable. Recall that a node’s being seen means having been discovered as a reason during another node’s processing. At the bottom of the loop, the counter contains the number of unprocessed but seen nodes we know about which end a reverse path from the conflict variable backwards consisting only of current-level nodes (say it three times fast). When c reaches zero, it means there are no seen reverse paths back from the conflict node to the decision variable. Since p is from the current decision level, however, it must have a path from the decision variable. Therefore p must dominate all paths from the decision variable to the conflict variable. p is a UIP node. Moreover, since p is the first such node, it must be the first UIP node.

The first UIP learned clause is determined by the literals that cross the cut (S,T) determined by p, as indicated above. Every proper descendant of p is on the T side of the cut. Therefore, any lower-decision-level node must be on the S side of the cut. (If such a node x were on the T side, there would be a path from the decision node for d to x, and x would have level d, contradicting the assumption that x has a lower decision level.) The first such nodes encountered during traversal, as well as p, cross the cut. The algorithm includes exactly these variables in the learned clause.

Update 2011-12-24: Corrected algorithm to decrement c properly, as Peter reported in the comments.

While reflecting on how I might be able to improve my SAT solver, I discovered that my inner bottleneck loop contained two monad transformers (ErrorT and StateT) on top of a base monad (ST). The two transformers are provided by the Monad Transformer Library (MTL). According to the Haskell Wiki’s page on improving performance with monads:

MTL is an excellent library for programming with monads. However stacked monad transformers do not inline well and the library is in need of an optimization pass. As a result, it can often impose a performance hit of up to 300% (your code will run up to three times slower).

Err … I guess having stacked transformers is a Bad Idea for my inner loop. So I set out to improve the situation. On the same wiki page, there is a report of excellent speedups for the continuation-passing-style (CPS) approach (section 2 on the wiki). The idea is that you implement a custom monad manually combining the features you want, in CPS.

Accordingly, following the advice of that same wiki page, I implemented a new monad supporting state threading, errors, and ST actions, all in continuation-passing style:

> {-# LANGUAGE PolymorphicComponents
>             ,MultiParamTypeClasses
>             ,FunctionalDependencies
>             ,FlexibleInstances
>  #-}
>
> import Control.Monad.Error hiding ((>=>), forM_)
> import Control.Monad.State.Lazy hiding ((>=>), forM_)


Performing an ST action requires a kind of lifting.

> dpllST :: ST s a -> SSTErrMonad e st s a
> {-# INLINE dpllST #-}
> dpllST st = SSTErrMonad (\k s -> st >>= \x -> k x s)
>


SSTErrMonad e st s a: the error type e, state type st, ST thread
s and result type a.

> newtype SSTErrMonad e st s a =
>     SSTErrMonad { unSSTErrMonad :: forall r. (a -> (st -> ST s (Either e r, st)))
>                               -> (st -> ST s (Either e r, st)) }
>
> instance Monad (SSTErrMonad e st s) where
>     return x = SSTErrMonad ($ x)
>     m >>= f  = bindSSTErrMonad m f
>
> bindSSTErrMonad :: SSTErrMonad e st s a -> (a -> SSTErrMonad e st s b)
>                 -> SSTErrMonad e st s b
> {-# INLINE bindSSTErrMonad #-}
> bindSSTErrMonad m f =
>     SSTErrMonad (\k -> unSSTErrMonad m (\a -> unSSTErrMonad (f a) k))
>
> instance MonadState st (SSTErrMonad e st s) where
>     get    = SSTErrMonad (\k s -> k s s)
>     put s' = SSTErrMonad (\k _ -> k () s')
>
> instance (Error e) => MonadError e (SSTErrMonad e st s) where
>     throwError err = -- throw away the continuation
>         SSTErrMonad (\_ s -> return (Left err, s))
>     catchError action handler = SSTErrMonad
>         (\k s -> do (x, s') <- runSSTErrMonad action s
>                     case x of
>                       Left err     -> unSSTErrMonad (handler err) k s'
>                       Right result -> k result s')

The brilliant thing about this implementation is that the monadic bind operator >>= does no case analysis: only if you explicitly attempt to catch an error do you need to do case analysis. In contrast, the ErrorT implementation does case analysis in every bind:

instance (Monad m, Error e) => Monad (ErrorT e m) where
    m >>= k = ErrorT $ do
        a <- runErrorT m
        case a of
          Left  l -> return (Left l)
          Right r -> runErrorT (k r)
    ...


The wiki page argues essentially that function calling (in my monad’s >>=) is less expensive than constant case analysis when errors are uncommon. And indeed, in my solver, they are uncommon (outside the inner loop, they are not possible).

Pretty Result Graphs; and, What did I do wrong?

But the result is a letdown:

(Graph produced using a Haskell Chart library, built on top of gtk2hs.)

The graph compares the runtime (in seconds) of the original code, in blue, with the new code, in red, on 52 benchmarks available from SATLIB.

The new monad only improves solving times slightly across my benchmarks. So, possibilities:

• I’ve implemented the monad incorrectly, which doesn’t seem likely since my many tests pass.
• I’ve implemented the monad correctly but inefficiently.
• The time spent doing case analysis isn’t significant compared with function invocation.

If anyone has a suggestion, please post it in the comments. I’ll definitely try it out.

P.S. Rest of SSTErrMonad, if you’re interested

> -- | @runSSTErrMonad m s@ executes a SSTErrMonad' action with initial state @s@
> -- until an error occurs or a result is returned.
> runSSTErrMonad :: (Error e) => SSTErrMonad e st s a -> (st -> ST s (Either e a, st))
> runSSTErrMonad m = unSSTErrMonad m (\x s -> return (return x, s))
>
> evalSSTErrMonad :: (Error e) => SSTErrMonad e st s a -> st -> ST s (Either e a)
> evalSSTErrMonad m s = do (result, _) <- runSSTErrMonad m s
>                          return result


Update 17 May 2008, 1549: The graph was on too big a scale because of one point, so I removed the point and now the differences are more manifest.

A Modern SAT Solver in Haskell

Update 2009-03-09: The git clone url has been corrected. Please post a comment if it doesn’t work.

For my AI class project this semester I chose to write a modern SAT solver in Haskell. The SAT problem poses the question: is there a way to make a logical statement true by assigning its propositions to true (1) or false (0)? A SAT solver answers the question.

For example, $P \wedge Q$ and $((P \rightarrow Q) \rightarrow P) \rightarrow P$ are both logical statements using variables $P$ and $Q$ (note that $\wedge$ should be pronounced “and”, while $\rightarrow$ is “only if”). The first can be made true, or satisfied, by assigning $P = 1$ and $Q = 1$, and the second is actually true no matter the values of the variables (logically, this means it is valid).

In reality SAT problems are represented in clausal form, in which a formula becomes a set of clauses, where each clause is a disjunction of literals. A literal is a variable or its negation. To satisfy a clause, at least one of its literals must be true; and to satisfy a set of clauses, every clause must be satisfied.

Anyway, the point is that many real-world problems can be described in this sort of logical language, such that a satisfying variable assignment can be mapped back into a solution to the problem encoded in the clauses. Solving these constraint satisfaction problems can thus be automated, provided a computer can discover a satisfying assignment efficiently. The only problem is that the computer cannot, for every such problem, discover a solution efficiently. This phenomenon is known as NP-completeness, which essentially groups certain problems as having similar inherent computational difficulty. It means all the problems you care about are prohibitively expensive to solve, in general. As far as we know.

Preamble: For anybody who’d like to look at the code, it’s available via git by running:

\$ git clone git://github.com/dbueno/funsat.git

There have been multiple obstacles to this project. I’m only aware of one other published attempt detailing the internals of a modern, DPLL SAT solver: Minisat. The paper was quite helpful, for the most part, especially with how to factor data structures. The other papers I read (Zhang et al. on zchaff, cache performance, and clause learning; Lynce et al. on data structures) gave me a good grasp of the engineering trade-offs, but little insight into how to translate them into an efficient implementation.

A DPLL-style SAT solver is composed of three essential steps: inference, decisions, and conflict analysis. The basic procedure loops over the following until no more decisions can be made:

• infer as much as possible
• if you have deduced conflicting assignments:
  • if you have made at least one decision, remove all inferences made after the last decision, record a reason for the conflict, and return to the top of the loop
  • otherwise, the problem is unsatisfiable
• if there is no conflict, pick an unassigned variable and decide its value (0 or 1)

Inference in DPLL

Many modern solvers use a single inference rule, called unit propagation (UP). The rule is: whenever a clause has all false literals but one, that one literal must be true in order to satisfy the clause. As an example of this, suppose we have set $P = 1, Q = 1$ and we have the constraint $\neg P \vee \neg Q \vee R$. Then it must be that $R = 1$ since that constraint must be satisfied, and the other two literals are false.
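The rule translates directly into code. Here is a naive sketch (my own illustration, not funsat's implementation): an assignment is the set of currently-true literals, and a clause is unit when it is unsatisfied and exactly one of its literals remains unassigned.

```haskell
import qualified Data.Set as Set

type Lit    = Int    -- negative means negated
type Clause = [Lit]

-- | Return the literal forced by unit propagation, if the clause is unit
-- under the given assignment (the set of currently-true literals).
unitLiteral :: Set.Set Lit -> Clause -> Maybe Lit
unitLiteral assigned clause
  | any (`Set.member` assigned) clause = Nothing   -- clause already satisfied
  | otherwise = case filter unassigned clause of
      [l] -> Just l                                -- every other literal is false
      _   -> Nothing
  where unassigned l = not (negate l `Set.member` assigned)
```

With P, Q, R numbered 1, 2, 3, the example above becomes `unitLiteral (Set.fromList [1, 2]) [-1, -2, 3]`, which yields `Just 3`, i.e. R must be 1.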

The clause is called a unit clause since the variable $R$ is forced to assume a certain value based on the interactions of variables in the rest of the constraint.

The obvious way to implement UP is, after every assignment, to scan the problem clauses for unit clauses. This is prohibitively expensive when the number of clauses is large (as it is in the problems you care about), so, most modern SAT solvers do efficient unit propagation using the watched literals scheme. As the name suggests, we keep track of any two literals of each clause, and maintain a mapping from each watched literal to the clauses in which it occurs. When a literal becomes true, we visit each clause C in which its negation occurs: C now has one more false literal and might be unit; if so, by construction the other watched literal for C must be the unit literal, and so it is propagated. If C is not unit then we choose a new watched literal to replace the just-assigned one.
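The per-clause visit step might be sketched as follows. The clause layout (watched literals stored at the front) and the result type are assumptions made for illustration, not funsat's representation; a real solver would also check whether the other watch is already true before propagating it.

```haskell
import qualified Data.Set as Set
import Data.List (delete)

type Lit    = Int    -- negative means negated
type Clause = [Lit]  -- invariant: the two watched literals come first
                     -- (clauses are assumed to have at least two literals)

data WatchResult
  = NewWatch Lit Clause  -- ^ found a non-false literal; watch it instead
  | Unit Lit             -- ^ the other watch is unit: propagate it
  | Conflict             -- ^ every literal is false
  deriving (Show, Eq)

-- | Visit a clause because @falseLit@, one of its two watches, just
-- became false under @assigned@ (the set of currently-true literals).
visitClause :: Set.Set Lit -> Clause -> Lit -> WatchResult
visitClause assigned clause falseLit =
    case [l | l <- rest, not (isFalse l)] of
      (w:_)              -> NewWatch w (other : w : delete w rest ++ [falseLit])
      [] | isFalse other -> Conflict
         | otherwise     -> Unit other
  where
    (w1:w2:rest) = clause
    other        = if w1 == falseLit then w2 else w1
    isFalse l    = negate l `Set.member` assigned
```

For example, with x1 and x2 true, visiting the clause ¬x1 ∨ x3 ∨ ¬x2 ∨ x4 on ¬x1 moves the watch to x4, while visiting ¬x1 ∨ x3 ∨ ¬x2 propagates x3 as unit.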

I implemented this in pretty much the same way it is implemented by minisat: an STUArray maps literals to watcher lists, that is, lists of clauses. Each time an assignment is made we visit the watcher lists, which costs us an array lookup and the length of the watcher list.

Decisions in DPLL

An easy decision strategy is simply to pick the first unassigned variable in the problem. We can do better without much work, though, by keeping track of the “activity” of each variable. (The activity is just a number, and bigger is better.) Whenever a variable occurs in the analysis of a conflict (see next section), we bump the activity of that variable by a constant. When it’s time to choose the next variable to decide, we choose the one with highest activity.

This naturally leads to needing a priority queue, which excited me because of my unnatural obsession with Fibonacci heaps. It seems like the perfect use case: the number of Extract-Min operations is (probably) low relative to the number of variable-bumping (Decrease-Key) operations needed. Unfortunately, as I will detail in a future post, there are serious problems implementing Fibonacci heaps in Haskell. I thought Finger Trees would help — and there was already a Haskell implementation — but they turned out to be expensive (again, I refer you for details to a post I haven’t yet written).

At the moment I simply keep an array mapping variables to their activities to achieve constant-time bumping, and when I need to choose a variable to decide, I simply pick the maximum-activity variable from a list of unassigned variables (linear time). For comparison, a Fibonacci heap can give you the maximum element in constant (amortised) time, so there is room for improvement here. The potential improvement, however, is tempered by the fact that deciding which variable to assign next is not even close to the main runtime bottleneck — that honorable estate is reserved for unit propagation.
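In sketch form, substituting a pure Map for the mutable array funsat actually uses, the scheme looks like this:

```haskell
import qualified Data.Map as Map
import Data.List (maximumBy)
import Data.Ord (comparing)

type Var        = Int
type Activities = Map.Map Var Double

-- | Bump: add a fixed amount to a variable's activity.
bump :: Double -> Var -> Activities -> Activities
bump amount v = Map.insertWith (+) v amount

-- | Linear-time decision: pick the unassigned variable of highest activity.
decide :: Activities -> [Var] -> Maybe Var
decide _    []         = Nothing
decide acts unassigned = Just (maximumBy (comparing activity) unassigned)
  where activity v = Map.findWithDefault 0 v acts
```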

Conflict Analysis in DPLL

A conflict occurs when unit propagation infers that a literal which is known to be true must be false. Logically, it’s a contradiction. Usually (if your problem is satisfiable) it means that you picked the wrong value in a decision (i.e. assigned a variable to 1 when it should have been 0, or vice versa).

The original DPLL procedure did the obvious thing when a conflict occurred: it reversed the most recent decision. If the last thing done was to assign $P$ to 1, it simply undid any consequences of assigning $P$ to 1, and set it to 0. More recently, as indicated by the papers in the introduction, there has been a slew of techniques for clause learning: analysing exactly how the SAT solver produced the current conflict, then producing a new clause that is consistent with the problem and that prevents the conflict from ever occurring again. This new clause is called a learned clause.

This part of the solver is one of the hardest to get right, possibly because it is the most poorly explained in the papers I’ve read. The Minisat paper gives actual code for producing the so-called First UIP clause (UIP = “unique implication point”) of the conflict, but that code contains nested loops, a counter variable named “counter”, and no invariants, making it all but impenetrable. The best explanation of the theory is in this survey, but it has no code.

Since in general there are many learned clauses one can produce by analysing a conflict, one has to choose. The learned clause is produced by looking at the implication graph of the problem; a cut of that graph determines the learned clause. The First UIP clause has empirically been shown to be the best among common competitors, so I chose that one. That choice has proven painful.

There is a very clever algorithm for calculating the First UIP clause from a conflict graph using breadth-first search which takes linear time. Unfortunately, there is no rigorous description of it anywhere. My current implementation actually explicitly constructs the conflict graph and computes the First UIP cut by definition (which definition involves computing graph dominators). So, at least it’s correct. As soon as I can describe the First UIP linear-time algorithm precisely and implement it, I’ll post it for reference.

Current Thoughts

At the moment my solver isn’t too slow, but I’m not happy with the code. It feels like a C program written in Haskell (as it should, since much of it is adapted from Minisat). My main monad is StateT on top of ST (because I use STUArrays extensively).

There are glimmers of sanity: the explicit conflict-graph construction makes the implementation look like a specification; and the Functional Graph Library included with GHC has support for outputting any graph in Graphviz format, which makes debugging relatively sane.

Perhaps using completely different data structures I would be able to come up with something as efficient but (1) shorter and (2) easier to test. I welcome any suggestions (“yeah your code pretty much sucks”) or pointers (“why didn’t you read this paper?”). The git repo contains a bunch of benchmark problems with which one can test the solver, if you’re interested.

Functional Pearl: Trees

[This post is Literate Haskell. If you copy the entire post into a file with a .lhs extension, you can load up that file in your favorite Haskell Interpreter, and it should work. Then you can test the functions mentioned herein.]

Given a binary tree (you know, a thing that’s either a leaf with some data or two branches, both of which are binary trees), can you compute a new binary tree with exactly the same structure, but with every leaf’s value replaced by the minimum value in the tree?

If you’re a programmer, of course you can. One simple pass to find the minimum value in the tree, and another to replace each leaf with the minimum value. Here is the data type and an implementation of this idea in Haskell.


> data Tree a = Leaf a          -- The leaf holds data of any type.
>             | Branch (Tree a) (Tree a) deriving (Show)

> replaceMin2 t = let m = findMin t
>                 in propagate t m
>   where
>     findMin (Leaf m) = m
>     findMin (Branch left right) = findMin left `min` findMin right

>     propagate (Leaf _) m = Leaf m
>     propagate (Branch left right) m = Branch (propagate left m)
>                                              (propagate right m)



Simple, and clear. Now, can you think of a way to do it in one pass? You might consider making one pass in which (1) for each leaf, replace its value with a mutable cell that can hold a value, returning the value of the leaf, then (2) for each branch, recursively compute the min on the left and right. At the end, you set the cell you gave to all the leaves to the value of the min you computed in this single pass, so that each leaf gets the value at once.

This is a good idea. However, this will change the type of the new tree. Instead of having a tree of integers, you’ll have a tree of cells-pointing-to-integers, and you’ll need another pass to get rid of them.
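To make that concrete, here is the cell idea sketched with STRef (a hypothetical illustration, not code from the original post): every leaf shares one mutable cell, the min is written into it once, and a second traversal (thaw) is still needed to turn the intermediate Tree of cells back into a Tree Int.

```haskell
import Control.Monad.ST
import Data.STRef

data Tree a = Leaf a
            | Branch (Tree a) (Tree a)
            deriving (Show, Eq)

replaceMinST :: Tree Int -> Tree Int
replaceMinST t = runST $ do
    cell <- newSTRef 0
    (t', m) <- fill cell t      -- pass 1: hand every leaf the shared cell
    writeSTRef cell m
    thaw t'                     -- pass 2: read the cell back out
  where
    fill cell (Leaf a)     = return (Leaf cell, a)
    fill cell (Branch l r) = do
        (l', ml) <- fill cell l
        (r', mr) <- fill cell r
        return (Branch l' r', min ml mr)
    thaw (Leaf c)     = Leaf <$> readSTRef c
    thaw (Branch l r) = Branch <$> thaw l <*> thaw r
```

The intermediate tree has type Tree (STRef s Int), which is exactly the type change (and the extra pass) mentioned above.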

Here is a solution. One pass. Ponder it.


> replaceMin :: Tree Int -> Tree Int
> replaceMin t = let (t', m) = rpMin (t, m) in t'

> rpMin :: (Tree Int, Int) -> (Tree Int, Int)
> rpMin (Leaf a, m) = (Leaf m, a)
> rpMin (Branch l r, m) = let (l', ml) = rpMin (l, m)
>                             (r', mr) = rpMin (r, m)
>                         in (Branch l' r', ml `min` mr)



Note how in replaceMin the variable m is both bound to the result of calling rpMin and passed as an argument to rpMin. Totally insane.

That Doesn’t Work

I know; I said the same thing. The contract of rpMin is that it consumes a tuple of the tree and the actual minimum value of the tree (!), and returns (1) the tree with the minimum value stuck in the right places, and (2) the actual minimum value. If you’re used to imperative languages, or even functional languages without having used lazy evaluation, this contract is utter and complete nonsense. In fact, it is strong grounds for doubting a programmer’s competence.

The reason it seems like nonsense is that call-by-value parameter passing is most programmers’ default mindset. When a function is applied to its arguments, those arguments are computed down to some value of the appropriate type, and only then is the function invoked. This is the function-application semantics of many programming languages, such as Java, C, and Perl.

The function replaceMin depends upon a strange and wonderful semantics called call-by-need. In call-by-need semantics, when a function is invoked, the arguments are not evaluated. Only when, and if, the body of the function demands one of the arguments is it actually turned into a value. This fact makes replaceMin possible. Since nothing in rpMin actually tests the value of m (e.g. by comparing it to 0 or something), but simply uses it, we are allowed to refer to a value which cannot even be computed until rpMin is finished.

This means the algorithm given above essentially implements the cells idea suggested before, with at least two advantages: (1) it’s pure, meaning there is no way to step on our toes by assigning the wrong thing at the wrong time; (2) it really requires only one traversal, and doesn’t change the type of our tree. It computes a new Tree of ints, just like the old one.

Acknowledgements

The function replaceMin and its helper come from http://www.cse.ogi.edu/pacsoft/projects/rmb/, in particular under “Other Artifacts”, linked to by the bullet point “Slides for a talk on the recursive do-notation.” I have omitted a link to the PDF on purpose, so you’ll see all the cool stuff those people have done.

Update 18 Feb 2008. As one person mentioned in the comments, this is quite obviously call-by-need, not call-by-name, semantics. I have changed it in the post to avoid further confusion.