In a previous post I mentioned I was using a super-linear algorithm for calculating the first unique implication point (UIP) learned clause in funsat. The algorithm basically uses the definition of first UIP and requires calculating graph dominators of an explicitly-constructed conflict graph. By contrast, the linear-time algorithm described in the Minisat paper never explicitly constructs the graph; it merely inspects the trail in reverse, figuring out which literals should be inserted in the conflict clause using the reasons for each assignment. The former algorithm has the advantage that it directly implements what it means to calculate the first UIP clause; in other words, it's easily seen to be a correct implementation. The latter isn't, but when it works, it's faster and leaner.
The Minisat paper gives only lightly documented pseudocode for the algorithm. There are no data structure invariants and no proof of correctness. Here's my attempt at explaining how and why it works.
The implication graph is described well in this handbook chapter. Basically, it is a directed graph in which the nodes are literals from the assignment, and an edge x → y indicates the assignment x helped propagate the assignment y.
A UIP of an implication graph is a node at the current decision level d such that every path from the decision variable at level d to the conflict variable or its negation must go through it. Intuitively, it is a single reason at level d that causes the conflict. (This paragraph is from the same chapter.)
There may be many UIPs for the current decision level. The last decision variable is always a UIP. The first UIP is the one with the shortest path to the conflict node.
From the UIP definition it is clear why graph dominators are involved: every UIP is a dominator of the conflict variables with respect to the last decision variable. My first implementation explicitly calculated those dominators, and chose the one closest to the conflict nodes.
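To make the definition-based approach concrete, here is a small Python sketch (the graph and node names are invented for illustration, not taken from funsat). It computes the UIPs exactly as defined: the nodes shared by every path from the decision node to the conflict node.

```python
# Toy conflict graph (invented for illustration): every node is at the
# current decision level, "x1" is the decision node, "K" the conflict node.
EDGES = {
    "x1": ["x2", "x3"],
    "x2": ["x4"],
    "x3": ["x4"],
    "x4": ["x5", "x6"],
    "x5": ["K"],
    "x6": ["K"],
    "K":  [],
}

def all_paths(graph, src, dst, path=()):
    # Enumerate every simple path from src to dst (fine for tiny graphs).
    path = path + (src,)
    if src == dst:
        yield path
        return
    for nxt in graph[src]:
        if nxt not in path:
            yield from all_paths(graph, nxt, dst, path)

def uips(graph, decision, conflict):
    # UIPs are the nodes that appear on *every* decision-to-conflict path.
    paths = list(all_paths(graph, decision, conflict))
    common = set.intersection(*map(set, paths)) - {conflict}
    # Dominators appear in the same relative order on every path, so sorting
    # by position in any one path orders them; the last one is the first UIP
    # (the one closest to the conflict).
    return sorted(common, key=paths[0].index)

print(uips(EDGES, "x1", "K"))   # decision variable first, first UIP last
```

This is the super-linear flavor of the computation: enumerating paths is exponential in general, which is exactly why the trail-based algorithm below is attractive.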
Once the desired UIP is found, we have to calculate the corresponding learned clause. It turns out that good learned clauses correspond to cuts of the implication graph during a conflict (this graph is often called the conflict graph). The learned clause corresponding to a cut (S, T) is the set of negations of the nodes that are cut edge sources. Formally, this is the set { ¬x : x ∈ S and (x, y) ∈ E for some y ∈ T }. To tie the knot, one need only know that the UIP u determines the cut (S, T) where T = { y : y is a proper descendant of u } (a set that includes the conflict node) and S contains every other node. This information is sufficient to calculate the learned clause corresponding to a UIP.
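A sketch of reading the learned clause off the cut, again over an invented toy graph. Here "y1" stands for a node assigned at a lower decision level; everything else, including the decision "x1" and conflict "K", is at the current level.

```python
# Toy conflict graph (invented for illustration).  "x1" is the decision at
# the current level, "K" the conflict node, and "y1" was assigned at a
# lower decision level.
GRAPH = {
    "x1": ["x2", "x3"],
    "x2": ["x4"],
    "x3": ["x4"],
    "x4": ["x5", "x6"],
    "y1": ["x5"],
    "x5": ["K"],
    "x6": ["K"],
    "K":  [],
}

def descendants(graph, node):
    # All proper descendants of node, by iterative DFS.
    seen, stack = set(), list(graph[node])
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(graph[n])
    return seen

def learned_clause(graph, uip):
    # T = the UIP's proper descendants (conflict node included);
    # S = every other node.  The clause is the negation of each S-side
    # node that has an edge crossing into T.
    T = descendants(graph, uip)
    S = set(graph) - T
    sources = {x for x in S for y in graph[x] if y in T}
    return {"-" + x for x in sources}

print(learned_clause(GRAPH, "x4"))   # the negations of x4 and y1
```

With "x4" as the first UIP, the crossing edges are x4 → x5, x4 → x6, and y1 → x5, so the clause contains the negations of x4 and y1.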
The trail is the current assignment arranged in reverse chronological unit-propagation order (last assigned first out). The reason for a literal q is the set of literals, other than q itself, of the clause that unit-propagated q. (These are the negations of the assignments that fired the propagation, so each is false under the current assignment; decision literals have no reason.)
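With integer literals, where negation is arithmetic negation (a common SAT-solver convention, assumed here rather than taken from funsat), the reason can be read directly off a literal's antecedent clause:

```python
def reason(q, antecedent):
    # The reason for q: the other literals of the clause that
    # unit-propagated q (each is false under the current assignment).
    assert q in antecedent
    return {l for l in antecedent if l != q}

# The clause (-1 v -2 v 3) became unit and propagated 3
# once 1 and 2 were both assigned true:
print(reason(3, [-1, -2, 3]))   # the set containing -1 and -2
```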
The algorithm outputs a learned clause (sequence of literals). There are a few important variables and conventions for the following pseudocode:
- p — Invariant: literal from the current decision level, initially the propagated literal that caused the conflict. The top of the trail is not p.
- c — Invariant: number of unprocessed but seen variables from current decision level, initially 0.
- We can mark a variable as seen. All variables are initially unseen.
- Every literal included in the learned clause appears with the sign opposite to the one it has under the current assignment. (In the case of the conflicting literal, we include its negation.)
Onto the pseudocode:
Process literals starting with p until we have processed all the literals seen at the current decision level:

    do
        // Process literal p:
        foreach literal q in the reason for p
            if var(q)¹ is unseen
                mark var(q) seen
                if q is from the current decision level
                    increment c
                else if q is from a lower decision level
                    add q to the learned clause
        // Select the next interesting literal to follow:
        do
            assign p to head of trail
            undo head of trail
        while p is unseen
        decrement c
    while c > 0
By now, p is the first UIP node of the current decision level. Add the negation of p to the learned clause and output it.
¹ var(x) is the variable corresponding to the literal x.
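The whole loop can be sketched in Python. This is a hedged reconstruction under assumed interfaces (integer literals, dict-based levels and antecedents), not funsat's or Minisat's actual code; and, deviating slightly from the pseudocode above, it seeds the first iteration with the literals of the falsified clause rather than with a single conflicting literal, in the Minisat style.

```python
def analyze(conflict_clause, trail, levels, antecedents, current_level):
    """First-UIP conflict analysis over the trail (sketch; interfaces
    are assumptions, not funsat's actual API).

    conflict_clause -- literals of the falsified clause
    trail           -- literals in assignment order (last assigned last)
    levels          -- var -> decision level at which it was assigned
    antecedents     -- lit -> the clause that unit-propagated it
    Returns the learned clause as a set of integer literals.
    """
    seen, learned, c = set(), set(), 0
    trail = list(trail)             # local copy; popping models "undo"
    reason = list(conflict_clause)  # seed with the falsified clause
    while True:
        for q in reason:            # every q here is false right now
            v = abs(q)
            if v not in seen:
                seen.add(v)
                if levels[v] == current_level:
                    c += 1          # one more seen current-level variable
                else:
                    learned.add(q)  # lower level: a cut-crossing source
        # Walk back to the next seen literal on the trail, undoing as we go.
        while True:
            p = trail.pop()
            if abs(p) in seen:
                break
        c -= 1                      # p is now accounted for
        if c == 0:
            break                   # no other seen current-level variable
                                    # remains: p is the first UIP
        reason = [l for l in antecedents[p] if l != p]
    learned.add(-p)                 # the UIP itself, negated
    return learned

# Invented scenario: decision 1 at level 1; decision 2 at level 2
# propagates 3 and 4, then 5, then 6, whereupon (-5 v -6) is falsified.
clause = analyze(
    conflict_clause=[-5, -6],
    trail=[1, 2, 3, 4, 5, 6],
    levels={1: 1, 2: 2, 3: 2, 4: 2, 5: 2, 6: 2},
    antecedents={3: [-2, 3], 4: [-2, 4], 5: [-3, -4, 5], 6: [-5, -1, 6]},
    current_level=2,
)
print(clause)   # -5 (the first UIP, negated) and the lower-level literal -1
```

In the example, 5 is the first UIP: the conflict involves 5 and 6, and 6 was propagated from 5 (plus the lower-level assignment 1), so every current-level path to the conflict passes through 5.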
The algorithm performs a backwards breadth-first search for the first UIP node. The trail is the BFS queue. The counter allows us to deduce when p is the closest dominator of the conflict variable. Recall that a node's being seen means having been discovered as a reason during another node's processing. At the bottom of the loop, the counter contains the number of unprocessed but seen nodes we know about which end a reverse path from the conflict variable backwards consisting only of current-level nodes (say it three times fast). When c reaches zero, every reverse path discovered so far from the conflict node passes through p; no other seen current-level node remains unprocessed. Since p is from the current decision level, it must lie on a path from the decision variable. Therefore p must dominate all paths from the decision variable to the conflict variable: p is a UIP node. Moreover, since the trail is inspected in reverse chronological order, p is the UIP closest to the conflict, that is, the first UIP node.
The first UIP learned clause is determined by the literals that cross the cut (S,T) determined by p, as indicated above. Every proper descendant of p is on the T side of the cut. Therefore, any lower-decision-level node must be on the S side of the cut. (If such a node x were on the T side, there would be a path from the decision node for d to x, and x would have level d, contradicting the assumption that x has a lower decision level.) The first such nodes encountered during traversal, as well as p, cross the cut. The algorithm includes exactly these variables in the learned clause.
Update 2011-12-24: Corrected algorithm to decrement c properly, as Peter reported in the comments.