Design And Analysis Of Algorithms

The Bellman-Ford Algorithm

The Bellman-Ford algorithm: The Bellman-Ford algorithm solves the single-source shortest-paths problem in the general case in which edge weights may be negative. Given a weighted, directed graph G = (V, E) with source s and weight function w : E → R, the Bellman-Ford algorithm returns a boolean value indicating whether or not there is a negative-weight cycle that is reachable from the source. If there is such a cycle, the algorithm indicates that no solution exists. If there is no such cycle, the algorithm produces the shortest paths and their weights. The algorithm uses relaxation, progressively decreasing an estimate d[v] on the weight of a shortest path from the source s to each vertex v ∈ V until it achieves the actual shortest-path weight δ(s, v). The algorithm returns TRUE if and only if the graph contains no negative-weight cycles that are reachable from the source. Figure 24.4 shows the execution of the Bellman-Ford algorithm on a graph with 5 vertices. After initializing the d and π values of all vertices in line 1, the algorithm makes |V| - 1 passes over the edges of the graph. Each pass is one iteration of the for loop of lines 2-4 and consists of relaxing each edge of the graph once. Figures 24.4(b)-(e) show the state of the algorithm after each of the four passes over the edges. After making |V| - 1 passes, lines 5-8 check for a negative-weight cycle and return the appropriate boolean value. (We'll see a little later why this check works.)
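To make the relaxation loop concrete, here is a minimal Python sketch of the procedure just described; the edge-list representation and the function name bellman_ford are illustrative choices for this sketch, not the book's pseudocode.

```python
# A minimal sketch of the Bellman-Ford procedure described above.
def bellman_ford(vertices, edges, s):
    """vertices: iterable of vertex ids; edges: list of (u, v, w) triples."""
    INF = float("inf")
    d = {v: INF for v in vertices}      # shortest-path estimates
    pi = {v: None for v in vertices}    # predecessors
    d[s] = 0                            # initialization (line 1)

    # |V| - 1 passes, each relaxing every edge once (lines 2-4)
    for _ in range(len(d) - 1):
        for u, v, w in edges:
            if d[u] + w < d[v]:
                d[v] = d[u] + w
                pi[v] = u

    # check for a negative-weight cycle reachable from s (lines 5-8)
    for u, v, w in edges:
        if d[u] + w < d[v]:
            return False, d, pi
    return True, d, pi

# A 5-vertex example with negative-weight edges but no negative-weight cycle.
vertices = ["s", "t", "x", "y", "z"]
edges = [("s", "t", 6), ("s", "y", 7), ("t", "x", 5), ("t", "y", 8), ("t", "z", -4),
         ("x", "t", -2), ("y", "x", -3), ("y", "z", 9), ("z", "s", 2), ("z", "x", 7)]
ok, d, pi = bellman_ford(vertices, edges, "s")
print(ok, d["z"])   # True -2
```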

Finding The Closest Pair Of Points

We next show that at most 8 points of P can reside within this δ × 2δ rectangle. Consider the δ × δ square forming the left half of this rectangle. Since all points within PL are at least δ units apart, at most 4 points can reside within this square; Figure 33.11(b) shows how. Similarly, at most 4 points in PR can reside within the δ × δ square forming the right half of the rectangle. Thus, at most 8 points of P can reside within the δ × 2δ rectangle. (Note that since points on line l may be in either PL or PR, there may be up to 4 points on l. This limit is achieved if there are two pairs of coincident points such that each pair consists of one point from PL and one point from PR, one pair is at the intersection of l and the top of the rectangle, and the other pair is where l intersects the bottom of the rectangle.) Having shown that at most 8 points of P can reside within the rectangle, it is easy to see that we need only check the 7 points following each point in the array Y. Still assuming that the closest pair is pL and pR, let us assume without loss of generality that pL precedes pR in array Y. Then, even if pL occurs as early as possible in Y and pR occurs as late as possible, pR is in one of the 7 positions following pL. Thus, we have shown the correctness of the closest-pair algorithm.

Implementation and running time: As we have noted, our goal is to have the recurrence for the running time be T(n) = 2T(n/2) + O(n), where T(n) is the running time for a set of n points. The main difficulty is in ensuring that the arrays XL, XR, YL, and YR, which are passed to recursive calls, are sorted by the proper coordinate and also that the array Y is sorted by y-coordinate. (Note that if the array X that is received by a recursive call is already sorted, then the division of set P into PL and PR is easily accomplished in linear time.)
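As a concrete illustration of the 7-point check, here is a Python sketch of only the strip-checking step, assuming the recursive calls have already produced the candidate distance δ (delta below) and that y_strip holds the points within δ of line l, sorted by y-coordinate; the names are illustrative.

```python
# Sketch of the strip-checking step only; points are (x, y) tuples.
import math

def closest_in_strip(y_strip, delta):
    best = delta
    best_pair = None
    for i, p in enumerate(y_strip):
        # only the 7 points following p in y-order can beat delta
        for q in y_strip[i + 1 : i + 8]:
            dist = math.dist(p, q)
            if dist < best:
                best, best_pair = dist, (p, q)
    return best, best_pair
```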

A Merging Network

A merging network: Our sorting network will be constructed from merging networks, which are networks that can merge two sorted input sequences into one sorted output sequence. We modify BITONIC-SORTER[n] to create the merging network MERGER[n]. As with the bitonic sorter, we shall prove the correctness of the merging network only for inputs that are zero-one sequences. The merging network is based on the following intuition. Given two sorted sequences, if we reverse the order of the second sequence and then concatenate the two sequences, the resulting sequence is bitonic. For example, given the sorted zero-one sequences X = 00000111 and Y = 00001111, we reverse Y to get YR = 11110000. Concatenating X and YR yields 0000011111110000, which is bitonic. Thus, to merge the two input sequences X and Y, it suffices to perform a bitonic sort on X concatenated with YR.

We can construct MERGER[n] by modifying the first half-cleaner of BITONIC-SORTER[n]. The key is to perform the reversal of the second half of the inputs implicitly. Given two sorted sequences a1, a2, ..., an/2 and an/2+1, an/2+2, ..., an to be merged, we want the effect of bitonically sorting the sequence a1, a2, ..., an/2, an, an-1, ..., an/2+1. Since the first half-cleaner of BITONIC-SORTER[n] compares inputs i and n/2 + i, for i = 1, 2, ..., n/2, we make the first stage of the merging network compare inputs i and n - i + 1. Figure 27.10 shows the correspondence. The only subtlety is that the order of the outputs from the bottom of the first stage of MERGER[n] is reversed compared with the order of outputs from an ordinary half-cleaner. Since the reversal of a bitonic sequence is bitonic, however, this makes no difference: the top and bottom halves of the first stage's output are still bitonic, so each half can be sorted by a bitonic sorter of half the size, just as in BITONIC-SORTER[n].
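The wiring difference between a half-cleaner and the first stage of MERGER[n] can be made explicit with a small sketch that merely lists the comparator index pairs; the function names and 1-based indexing are assumptions of this illustration.

```python
# Comparator index pairs (1-based, n a power of 2).
def half_cleaner_stage(n):
    # the half-cleaner compares inputs i and n/2 + i
    return [(i, n // 2 + i) for i in range(1, n // 2 + 1)]

def merger_first_stage(n):
    # the first stage of MERGER[n] compares inputs i and n - i + 1
    return [(i, n - i + 1) for i in range(1, n // 2 + 1)]

print(half_cleaner_stage(8))   # [(1, 5), (2, 6), (3, 7), (4, 8)]
print(merger_first_stage(8))   # [(1, 8), (2, 7), (3, 6), (4, 5)]
```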

Maximum Bipartite Matching

Maximum bipartite matching: Some combinatorial problems can easily be cast as maximum-flow problems. The multiple-source, multiple-sink maximum-flow problem from Section 26.1 gave us one example. There are other combinatorial problems that seem on the surface to have little to do with flow networks, but can in fact be reduced to maximum-flow problems. This section presents one such problem: finding a maximum matching in a bipartite graph (see Section B.4). In order to solve this problem, we shall take advantage of an integrality property provided by the Ford-Fulkerson method. We shall also see that the Ford-Fulkerson method can be made to solve the maximum-bipartite-matching problem on a graph G = (V, E) in O(VE) time.

Given an undirected graph G = (V, E), a matching is a subset of edges M ⊆ E such that for all vertices v ∈ V, at most one edge of M is incident on v. We say that a vertex v ∈ V is matched by matching M if some edge in M is incident on v; otherwise, v is unmatched. A maximum matching is a matching of maximum cardinality, that is, a matching M such that for any matching M', we have |M| ≥ |M'|. In this section, we shall restrict our attention to finding maximum matchings in bipartite graphs. We assume that the vertex set can be partitioned into V = L ∪ R, where L and R are disjoint and all edges in E go between L and R. We further assume that every vertex in V has at least one incident edge. Figure 26.7 illustrates the notion of a matching.
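Below is a compact Python sketch of maximum bipartite matching by repeated augmenting-path search, which is what the Ford-Fulkerson method amounts to on the corresponding unit-capacity flow network; the adjacency representation and names are illustrative, not the book's pseudocode.

```python
# adj maps each vertex in L to the list of its neighbours in R.
def max_bipartite_matching(L, adj):
    match_of = {}                       # matched partner of each vertex in R

    def try_augment(u, visited):
        for v in adj[u]:
            if v in visited:
                continue
            visited.add(v)
            # v is free, or its current partner can be re-matched elsewhere
            if v not in match_of or try_augment(match_of[v], visited):
                match_of[v] = u
                return True
        return False

    matching_size = 0
    for u in L:                         # at most |V| augmentations, O(E) each
        if try_augment(u, set()):
            matching_size += 1
    return matching_size, match_of

L = ["l1", "l2", "l3"]
adj = {"l1": ["r1", "r2"], "l2": ["r1"], "l3": ["r2", "r3"]}
print(max_bipartite_matching(L, adj)[0])   # 3
```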

Breadth-First Search

Before proving the various properties of breadth-first search, we take on the somewhat easier job of analyzing its running time on an input graph G = (V, E). We use aggregate analysis, as introduced in the section on aggregate analysis. After initialization, no vertex is ever whitened, and thus the test in line 13 ensures that each vertex is enqueued at most once, and hence dequeued at most once. The operations of enqueuing and dequeuing take O(1) time, so the total time devoted to queue operations is O(V). Because the adjacency list of each vertex is scanned only when the vertex is dequeued, each adjacency list is scanned at most once. Since the sum of the lengths of all the adjacency lists is Θ(E), the total time spent in scanning adjacency lists is O(E). The overhead for initialization is O(V), and thus the total running time of BFS is O(V + E). Thus, breadth-first search runs in time linear in the size of the adjacency-list representation of G.

At the beginning of this section, we claimed that breadth-first search finds the distance to each reachable vertex in a graph G = (V, E) from a given source vertex s ∈ V. Define the shortest-path distance δ(s, v) from s to v as the minimum number of edges in any path from vertex s to vertex v; if there is no path from s to v, then δ(s, v) = ∞. A path of length δ(s, v) from s to v is said to be a shortest path from s to v. Before showing that breadth-first search actually computes shortest-path distances, we investigate an important property of shortest-path distances. The procedure BFS builds a breadth-first tree as it searches the graph, as illustrated in Figure 22.3. The tree is represented by the π field in each vertex. More formally, for a graph G = (V, E) with source s, we define the predecessor subgraph of G as Gπ = (Vπ, Eπ), where Vπ = {v ∈ V : π[v] ≠ NIL} ∪ {s} and Eπ = {(π[v], v) : v ∈ Vπ - {s}}.
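The following short Python sketch mirrors the analysis above: each vertex is enqueued and dequeued at most once and each adjacency list is scanned once, giving O(V + E) overall; the dictionary-based representation is an assumption of the sketch.

```python
# Minimal BFS computing the d and pi fields described above.
from collections import deque

def bfs(adj, s):
    d = {v: float("inf") for v in adj}    # delta(s, v) = infinity if unreachable
    pi = {v: None for v in adj}
    d[s] = 0
    q = deque([s])
    while q:
        u = q.popleft()                   # each vertex dequeued at most once
        for v in adj[u]:                  # each adjacency list scanned once
            if d[v] == float("inf"):      # a "white" (undiscovered) vertex
                d[v] = d[u] + 1
                pi[v] = u
                q.append(v)
    return d, pi
```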

Representations Of Graphs

Representations of graphs: There are two standard ways to represent a graph G = (V, E): as a collection of adjacency lists or as an adjacency matrix. Either way is applicable to both directed and undirected graphs. The adjacency-list representation is usually preferred, because it provides a compact way to represent sparse graphs, those for which |E| is much less than |V|². Most of the graph algorithms presented in this book assume that an input graph is represented in adjacency-list form. An adjacency-matrix representation may be preferred, however, when the graph is dense, that is, when |E| is close to |V|², or when we need to be able to tell quickly if there is an edge connecting two given vertices. For example, two of the all-pairs shortest-paths algorithms presented in All-Pairs Shortest Paths assume that their input graphs are represented by adjacency matrices. The adjacency-list representation of a graph G = (V, E) consists of an array Adj of |V| lists, one for each vertex in V. For each u ∈ V, the adjacency list Adj[u] contains all the vertices v such that there is an edge (u, v) ∈ E. That is, Adj[u] consists of all the vertices adjacent to u in G. (Alternatively, it may contain pointers to these vertices.) The vertices in each adjacency list are typically stored in an arbitrary order. Figure 22.1(b) is an adjacency-list representation of the undirected graph in Figure 22.1(a). Similarly, Figure 22.2(b) is an adjacency-list representation of the directed graph in Figure 22.2(a).
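As a small illustration, the sketch below builds both representations for the same graph; the helper name and 1-based vertex numbering are assumptions of the example.

```python
# Build an adjacency-list dict and an adjacency matrix for the same graph.
def build_representations(n, edge_list, directed=True):
    adj = {u: [] for u in range(1, n + 1)}          # Adj[u]: neighbours of u
    matrix = [[0] * (n + 1) for _ in range(n + 1)]  # 1-indexed |V| x |V| matrix
    for u, v in edge_list:
        adj[u].append(v)
        matrix[u][v] = 1
        if not directed:
            adj[v].append(u)
            matrix[v][u] = 1
    return adj, matrix

adj, A = build_representations(3, [(1, 2), (2, 3), (3, 1)])
# O(1) edge query with the matrix, O(degree) with the list:
assert A[1][2] == 1 and 2 in adj[1]
```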

The Knuth-Morris-Pratt Algorithm

The Knuth-Morris-Pratt matching algorithm is given by the procedure KMP-MATCHER. It is mostly modeled after FINITE-AUTOMATON-MATCHER, as we shall see. KMP-MATCHER calls the auxiliary procedure COMPUTE-PREFIX-FUNCTION to compute π. We begin with an analysis of the running times of these procedures. Proving these procedures correct will be more complicated. The running time of COMPUTE-PREFIX-FUNCTION is Θ(m), using the potential method of amortized analysis. We associate a potential of k with the current state k of the algorithm. This potential has an initial value of 0, by line 3. Line 6 decreases k whenever it is executed, since π[k] < k. Since π[k] ≥ 0 for all k, however, k can never become negative. The only other line that affects k is line 8, which increases k by at most one during each execution of the for loop body. Since k < q upon entering the for loop, and since q is incremented in each iteration of the for loop body, k < q always holds. (This justifies the claim that π[q] < q as well, by line 9.) We can pay for each execution of the while loop body on line 6 with the corresponding decrease in the potential function, since π[k] < k. Line 8 increases the potential function by at most one, so that the amortized cost of the loop body on lines 5-9 is O(1). Since the number of outer-loop iterations is Θ(m), and since the final potential function is at least as great as the initial potential function, the total actual worst-case running time of COMPUTE-PREFIX-FUNCTION is Θ(m).
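For concreteness, here is a Python transcription of COMPUTE-PREFIX-FUNCTION along the lines analysed above, with 0-based string indexing instead of the pseudocode's 1-based indexing, so line numbers do not correspond exactly.

```python
# Compute the prefix function pi for pattern P.
def compute_prefix_function(P):
    m = len(P)
    pi = [0] * m
    k = 0                       # current state; the potential in the analysis
    for q in range(1, m):       # outer loop runs Theta(m) times
        while k > 0 and P[k] != P[q]:
            k = pi[k - 1]       # each iteration paid for by a drop in k
        if P[k] == P[q]:
            k += 1              # k increases by at most one per iteration
        pi[q] = k
    return pi

print(compute_prefix_function("ababaca"))   # [0, 0, 1, 2, 3, 0, 1]
```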

Polynomial Time

Polynomial time: Polynomial-time solvable problems are generally regarded as tractable, but for philosophical, not mathematical, reasons. We can offer three supporting arguments.

Abstract problems: To understand the class of polynomial-time solvable problems, we must first have a formal notion of what a "problem" is. We define an abstract problem Q to be a binary relation on a set I of problem instances and a set S of problem solutions. For example, an instance for SHORTEST-PATH is a triple consisting of a graph and two vertices. A solution is a sequence of vertices in the graph, with perhaps the empty sequence denoting that no path exists. The problem SHORTEST-PATH itself is the relation that associates each instance of a graph and two vertices with a shortest path in the graph that connects the two vertices. Since shortest paths are not necessarily unique, a given problem instance may have more than one solution. This formulation of an abstract problem is more general than is required for our purposes. As we saw above, the theory of NP-completeness restricts attention to decision problems: those having a yes/no solution. In this case, we can view an abstract decision problem as a function that maps the instance set I to the solution set {0, 1}. For example, a decision problem related to SHORTEST-PATH is the problem PATH that we saw earlier. If i = ⟨G, u, v, k⟩ is an instance of the decision problem PATH, then PATH(i) = 1 (yes) if a shortest path from u to v has at most k edges, and PATH(i) = 0 (no) otherwise. Many abstract problems are not decision problems, but rather optimization problems, in which some value must be minimized or maximized. As we saw above, however, it is usually a simple matter to recast an optimization problem as a decision problem that is no harder.
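As a small illustration of recasting an optimization problem as a decision problem, the sketch below answers PATH(G, u, v, k) for an unweighted graph by breadth-first search; the graph representation and function name are assumptions of this example.

```python
# Decision version: does some path from u to v use at most k edges?
from collections import deque

def path_decision(adj, u, v, k):
    dist = {u: 0}
    q = deque([u])
    while q:
        x = q.popleft()
        for y in adj.get(x, []):
            if y not in dist:
                dist[y] = dist[x] + 1
                q.append(y)
    return 1 if dist.get(v, float("inf")) <= k else 0   # 1 = "yes", 0 = "no"
```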

Comparison Networks

Under the assumption that each comparator takes unit time, we can define the "running time" of a comparison network, that is, the time it takes for all the output wires to receive their values once the input wires receive theirs. Informally, this time is the largest number of comparators that any input element can pass through as it travels from an input wire to an output wire. More formally, we define the depth of a wire as follows. An input wire of a comparison network has depth 0. Now, if a comparator has two input wires with depths dx and dy, then its output wires have depth max(dx, dy) + 1. Because there are no cycles of comparators in a comparison network, the depth of a wire is well defined, and we define the depth of a comparator to be the depth of its output wires. Figure 27.2 shows comparator depths. The depth of a comparison network is the maximum depth of an output wire or, equivalently, the maximum depth of a comparator. The comparison network of Figure 27.2, for example, has depth 3 because comparator E has depth 3. If each comparator takes one time unit to produce its output value, and if network inputs appear at time 0, a comparator at depth d produces its outputs at time d; the depth of the network therefore equals the time for the network to produce values at all of its output wires.

A sorting network is a comparison network for which the output sequence is monotonically increasing (that is, b1 ≤ b2 ≤ ··· ≤ bn) for every input sequence. Of course, not every comparison network is a sorting network, but the network of Figure 27.2 is. To see why, observe that after time 1, the minimum of the four input values has been produced by either the top output of comparator A or the top output of comparator B. After time 2, therefore, it must be on the top output of comparator C. A symmetrical argument shows that after time 2, the maximum of the four input values has been produced by the bottom output of comparator D. All that remains is for comparator E to ensure that the middle two values occupy their correct output positions, which happens at time 3.

A comparison network is like a procedure in that it specifies how comparisons are to occur, but it is unlike a procedure in that its size, the number of comparators that it contains, depends on the number of inputs and outputs. Therefore, we shall actually be describing "families" of comparison networks. For example, the goal of this chapter is to develop a family SORTER of efficient sorting networks. We specify a given network within a family by the family name and the number of inputs (which equals the number of outputs). For example, the n-input, n-output sorting network in the family SORTER is named SORTER[n].
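The depth computation can be expressed directly in code. The sketch below processes comparators in an order in which each comparator appears after everything that feeds it; the wiring used in the example is one plausible reading of the four-input network discussed above, and the names are illustrative.

```python
# Compute wire and comparator depths for a comparison network.
def network_depth(n_wires, comparators):
    depth = [0] * n_wires                     # input wires have depth 0
    comparator_depth = []
    for x, y in comparators:
        d = max(depth[x], depth[y]) + 1       # depth of this comparator
        depth[x] = depth[y] = d               # both output wires get depth d
        comparator_depth.append(d)
    return max(depth), comparator_depth

# A 4-input network with comparators A-E on wires 0-3 (illustrative wiring).
print(network_depth(4, [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]))
# (3, [1, 1, 2, 2, 3])
```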

Johnson's Algorithm For Sparse Graphs

Johnson's algorithm to compute all-pairs shortest paths uses the Bellman-Ford algorithm and Dijkstra's algorithm as subroutines. It assumes that the edges are stored in adjacency lists. The algorithm returns the usual |V| × |V| matrix D = (dij), where dij = δ(i, j), or it reports that the input graph contains a negative-weight cycle. As is typical for an all-pairs shortest-paths algorithm, we assume that the vertices are numbered from 1 to |V|. This code simply performs the actions we specified earlier. Line 1 produces the augmented graph G′. Line 2 runs the Bellman-Ford algorithm on G′ with weight function w and source vertex s. If G′, and hence G, contains a negative-weight cycle, line 3 reports the problem. Lines 4-11 assume that G′ contains no negative-weight cycles. Lines 4-5 set h(v) to the shortest-path weight δ(s, v) computed by the Bellman-Ford algorithm for all v ∈ V. Lines 6-7 compute the new weights ŵ(u, v) = w(u, v) + h(u) - h(v). For each pair of vertices u, v ∈ V, the for loop of lines 8-11 computes the shortest-path weight under ŵ by calling Dijkstra's algorithm once from each vertex in V. Line 11 stores in matrix entry duv the correct shortest-path weight δ(u, v), calculated using equation (25.10). Finally, line 12 returns the completed D matrix. Figure 25.6 shows the execution of Johnson's algorithm. If the min-priority queue in Dijkstra's algorithm is implemented by a Fibonacci heap, the running time of Johnson's algorithm is O(V² lg V + VE). The simpler binary min-heap implementation yields a running time of O(VE lg V), which is still asymptotically faster than the Floyd-Warshall algorithm if the graph is sparse.
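A condensed Python sketch of this whole pipeline is given below, combining the augmentation of G into G′, the Bellman-Ford computation of h, the reweighting, and one Dijkstra run per source; the graph representation, helper names, and simplified error handling are assumptions of the sketch.

```python
# adj maps each vertex to a list of (neighbour, weight) pairs.
import heapq

def johnson(adj):
    INF = float("inf")
    # Form G' by adding a new source s with zero-weight edges to every vertex.
    s = object()
    adj2 = {s: [(v, 0) for v in adj], **adj}

    # Bellman-Ford from s on G' computes h(v) = delta(s, v).
    h = {v: INF for v in adj2}
    h[s] = 0
    for _ in range(len(adj2) - 1):
        for u in adj2:
            for v, w in adj2[u]:
                if h[u] + w < h[v]:
                    h[v] = h[u] + w
    for u in adj2:
        for v, w in adj2[u]:
            if h[u] + w < h[v]:
                raise ValueError("negative-weight cycle")

    # Dijkstra on the reweighted graph: w_hat(u, v) = w(u, v) + h(u) - h(v) >= 0.
    def dijkstra(src):
        d = {v: INF for v in adj}
        d[src] = 0
        pq = [(0, id(src), src)]
        while pq:
            du, _, u = heapq.heappop(pq)
            if du > d[u]:
                continue
            for v, w in adj[u]:
                nd = du + w + h[u] - h[v]
                if nd < d[v]:
                    d[v] = nd
                    heapq.heappush(pq, (nd, id(v), v))
        return d

    # One Dijkstra per source; undo the reweighting to recover delta(u, v).
    return {u: {v: dhat + h[v] - h[u] for v, dhat in dijkstra(u).items()}
            for u in adj}
```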

The Floyd-Warshall Algorithm

The Floyd-Warshall algorithm: In this section, we shall use a different dynamic-programming formulation to solve the all-pairs shortest-paths problem on a directed graph G = (V, E). The resulting algorithm, known as the Floyd-Warshall algorithm, runs in Θ(V³) time. As before, negative-weight edges may be present, but we assume that there are no negative-weight cycles. As in Shortest paths and matrix multiplication, we shall follow the dynamic-programming process to develop the algorithm. After studying the resulting algorithm, we shall present a similar method for finding the transitive closure of a directed graph.

The structure of a shortest path: In the Floyd-Warshall algorithm, we use a different characterization of the structure of a shortest path than we used in the matrix-multiplication-based all-pairs algorithms. The algorithm considers the "intermediate" vertices of a shortest path, where an intermediate vertex of a simple path p = ⟨v1, v2, ..., vl⟩ is any vertex of p other than v1 or vl, that is, any vertex in the set {v2, v3, ..., vl-1}. The Floyd-Warshall algorithm is based on the following observation. Under our assumption that the vertices of G are V = {1, 2, ..., n}, let us consider a subset {1, 2, ..., k} of vertices for some k. For any pair of vertices i, j ∈ V, consider all paths from i to j whose intermediate vertices are all drawn from {1, 2, ..., k}, and let p be a minimum-weight path from among them. (Path p is simple.) The Floyd-Warshall algorithm exploits a relationship between path p and shortest paths from i to j with all intermediate vertices in the set {1, 2, ..., k - 1}. The relationship depends on whether or not k is an intermediate vertex of path p.
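Once this characterization is in hand, the algorithm itself is a triple loop over k, i, and j. The sketch below is a standard Python rendering of that idea (using 0-based indices), not the book's pseudocode.

```python
# W is an n x n matrix: W[i][j] is the weight of edge (i, j), 0 on the
# diagonal, and infinity where no edge exists.
def floyd_warshall(W):
    n = len(W)
    D = [row[:] for row in W]            # D^(0) = W
    for k in range(n):                   # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                # either k is not an intermediate vertex of the best path,
                # or the path splits into i ~> k and k ~> j pieces
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D

INF = float("inf")
W = [[0, 3, 8], [INF, 0, 1], [4, INF, 0]]
print(floyd_warshall(W))   # [[0, 3, 4], [5, 0, 1], [4, 7, 0]]
```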

Efficient FFT Implementations

How does BIT-REVERSE-COPY get the elements of the input vector a into the desired order in the array A? The order in which the leaves appear in Figure 30.4 is a bit-reversal permutation. That is, if we let rev(k) be the lg n-bit integer formed by reversing the bits of the binary representation of k, then we want to place vector element ak in array position A[rev(k)]. In Figure 30.4, for example, the leaves appear in the order 0, 4, 2, 6, 1, 5, 3, 7; this sequence in binary is 000, 100, 010, 110, 001, 101, 011, 111, and when we reverse the bits of each value we get the sequence 000, 001, 010, 011, 100, 101, 110, 111. To see that we want a bit-reversal permutation in general, we note that at the top level of the tree, indices whose low-order bit is 0 are placed in the left subtree and indices whose low-order bit is 1 are placed in the right subtree. Stripping off the low-order bit at each level, we continue this process down the tree, until we get the order given by the bit-reversal permutation at the leaves. Since the function rev(k) is easily computed, the BIT-REVERSE-COPY procedure can be written as follows. The iterative FFT implementation runs in time Θ(n lg n). The call to BIT-REVERSE-COPY(a, A) certainly runs in O(n lg n) time, since we iterate n times and can reverse an integer between 0 and n - 1, with lg n bits, in O(lg n) time. To complete the proof that ITERATIVE-FFT runs in time Θ(n lg n), we show that L(n), the number of times the body of the innermost loop (lines 8-13) is executed, is Θ(n lg n). The for loop of lines 6-13 iterates n/m = n/2^s times for each value of s, and the innermost loop of lines 8-13 iterates m/2 = 2^(s-1) times. Thus, L(n) = sum over s = 1 to lg n of (n/2^s) · 2^(s-1) = sum over s = 1 to lg n of n/2 = (n lg n)/2 = Θ(n lg n).
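A straightforward Python sketch of rev(k) and BIT-REVERSE-COPY is shown below; n is assumed to be a power of 2, and the names are illustrative.

```python
# rev(k): reverse the lg n bits of k.
def rev(k, lg_n):
    r = 0
    for _ in range(lg_n):
        r = (r << 1) | (k & 1)
        k >>= 1
    return r

def bit_reverse_copy(a):
    n = len(a)
    lg_n = n.bit_length() - 1
    A = [None] * n
    for k in range(n):
        A[rev(k, lg_n)] = a[k]    # place a_k at position rev(k)
    return A

print(bit_reverse_copy([0, 1, 2, 3, 4, 5, 6, 7]))   # [0, 4, 2, 6, 1, 5, 3, 7]
```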

Insertion In A Red-Black Tree

Insertion: Insertion of a node into an n-node red-black tree can be accomplished in O(lg n) time. We use a slightly modified version of the TREE-INSERT procedure to insert node z into the tree T as if it were an ordinary binary search tree, and then we color z red. To guarantee that the red-black properties are preserved, we then call an auxiliary procedure RB-INSERT-FIXUP to recolor nodes and perform rotations. The call RB-INSERT(T, z) inserts node z, whose key field is assumed to have already been filled in, into the red-black tree T. There are four differences between the procedures TREE-INSERT and RB-INSERT. First, all instances of NIL in TREE-INSERT are replaced by nil[T]. Second, we set left[z] and right[z] to nil[T] in lines 14-15 of RB-INSERT, in order to maintain the proper tree structure. Third, we color z red in line 16. Fourth, because coloring z red may cause a violation of one of the red-black properties, we call RB-INSERT-FIXUP(T, z) in line 17 of RB-INSERT to restore the red-black properties. To understand how RB-INSERT-FIXUP works, we shall break our examination of the code into three major steps. First, we shall determine what violations of the red-black properties are introduced in RB-INSERT when the node z is inserted and colored red. Second, we shall examine the overall goal of the while loop in lines 1-15. Finally, we shall explore each of the three cases into which the while loop is broken and see how they accomplish the goal. Figure 13.4 shows how RB-INSERT-FIXUP operates on a sample red-black tree.
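To make the description concrete, here is a compact Python sketch of RB-INSERT and RB-INSERT-FIXUP following the outline above, including the rotation helpers and the single nil sentinel; class and method names are illustrative, and only insertion is implemented.

```python
# A minimal red-black tree supporting insertion only.
RED, BLACK = "red", "black"

class Node:
    def __init__(self, key, color, nil=None):
        self.key, self.color = key, color
        self.left = self.right = self.p = nil

class RBTree:
    def __init__(self):
        self.nil = Node(None, BLACK)     # the sentinel nil[T]
        self.root = self.nil

    def _rotate_left(self, x):
        y = x.right
        x.right = y.left
        if y.left is not self.nil:
            y.left.p = x
        y.p = x.p
        if x.p is self.nil:
            self.root = y
        elif x is x.p.left:
            x.p.left = y
        else:
            x.p.right = y
        y.left = x
        x.p = y

    def _rotate_right(self, x):
        y = x.left
        x.left = y.right
        if y.right is not self.nil:
            y.right.p = x
        y.p = x.p
        if x.p is self.nil:
            self.root = y
        elif x is x.p.right:
            x.p.right = y
        else:
            x.p.left = y
        y.right = x
        x.p = y

    def insert(self, key):
        z = Node(key, RED, self.nil)     # new node: red, children nil[T]
        y, x = self.nil, self.root
        while x is not self.nil:         # ordinary binary-search-tree descent
            y = x
            x = x.left if key < x.key else x.right
        z.p = y
        if y is self.nil:
            self.root = z
        elif key < y.key:
            y.left = z
        else:
            y.right = z
        self._insert_fixup(z)            # restore the red-black properties

    def _insert_fixup(self, z):
        while z.p.color == RED:
            if z.p is z.p.p.left:
                y = z.p.p.right                      # z's uncle
                if y.color == RED:                   # case 1: recolor, move z up
                    z.p.color = y.color = BLACK
                    z.p.p.color = RED
                    z = z.p.p
                else:
                    if z is z.p.right:               # case 2: rotate into case 3
                        z = z.p
                        self._rotate_left(z)
                    z.p.color = BLACK                # case 3: recolor and rotate
                    z.p.p.color = RED
                    self._rotate_right(z.p.p)
            else:                                    # mirror image, left/right swapped
                y = z.p.p.left
                if y.color == RED:
                    z.p.color = y.color = BLACK
                    z.p.p.color = RED
                    z = z.p.p
                else:
                    if z is z.p.left:
                        z = z.p
                        self._rotate_right(z)
                    z.p.color = BLACK
                    z.p.p.color = RED
                    self._rotate_left(z.p.p)
        self.root.color = BLACK                      # the root is always black

t = RBTree()
for k in [41, 38, 31, 12, 19, 8]:
    t.insert(k)
print(t.root.key, t.root.color)   # 38 black
```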

Dynamic Tables

Dynamic tables: In some applications, we do not know in advance how many objects will be stored in a table. We might allocate space for a table, only to find out later that it is not enough. The table must then be reallocated with a larger size, and all objects stored in the original table must be copied over into the new, larger table. Similarly, if many objects have been deleted from the table, it may be worthwhile to reallocate the table with a smaller size. In this section, we study this problem of dynamically expanding and contracting a table. Using amortized analysis, we shall show that the amortized cost of insertion and deletion is only O(1), even though the actual cost of an operation is large when it triggers an expansion or a contraction. Moreover, we shall see how to guarantee that the unused space in a dynamic table never exceeds a constant fraction of the total space. We assume that the dynamic table supports the operations TABLE-INSERT and TABLE-DELETE. TABLE-INSERT inserts into the table an item that occupies a single slot, that is, a space for one item. Likewise, TABLE-DELETE can be thought of as removing an item from the table, thereby freeing a slot. The details of the data-structuring method used to organize the table are unimportant; we might use a stack, a heap, or a hash table. We might also use an array or collection of arrays to implement object storage, as we did earlier. We shall find it convenient to use a concept introduced in our analysis of hashing. We define the load factor α(T) of a nonempty table T to be the number of items stored in the table divided by the size (number of slots) of the table. We assign an empty table (one with no items) size 0, and we define its load factor to be 1. If the load factor of a dynamic table is bounded below by a constant, the unused space in the table is never more than a constant fraction of the total amount of space.
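A minimal Python sketch of such a table is given below, using the usual textbook policy of doubling on a full insertion and halving when a deletion drops the load factor below 1/4; the class name and the 1/4 threshold are assumptions of the sketch rather than details fixed by the text above.

```python
# A dynamic table with amortized O(1) insertion and deletion.
class DynamicTable:
    def __init__(self):
        self.size = 0          # number of slots
        self.num = 0           # number of items stored
        self.table = []

    def _reallocate(self, new_size):
        new_table = [None] * new_size
        new_table[:self.num] = self.table[:self.num]   # copy every item over
        self.table, self.size = new_table, new_size

    def insert(self, item):
        if self.num == self.size:                      # table full: double it
            self._reallocate(max(1, 2 * self.size))
        self.table[self.num] = item
        self.num += 1

    def delete(self):
        if self.num == 0:
            raise IndexError("delete from empty table")
        self.num -= 1
        self.table[self.num] = None
        if self.size > 1 and self.num < self.size // 4:   # load factor < 1/4
            self._reallocate(self.size // 2)

t = DynamicTable()
for i in range(10):
    t.insert(i)
print(t.num, t.size)   # 10 16
```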

Longest Common Subsequence

Longest common subsequence: In biological applications, we often want to compare the DNA of two (or more) different organisms. A strand of DNA consists of a string of molecules called bases, where the possible bases are adenine, guanine, cytosine, and thymine. Representing each of these bases by their initial letters, a strand of DNA can be expressed as a string over the finite set {A, C, G, T}. For example, the DNA of one organism may be S1 = ACCGGTCGAGTGCGCGGAAGCCGGCCGAA, while the DNA of another organism may be S2 = GTCGTTCGGAATGCCGTTGCTCTGTAAA. One goal of comparing two strands of DNA is to determine how "similar" the two strands are, as some measure of how closely related the two organisms are. Similarity can be and is defined in many different ways. For example, we can say that two DNA strands are similar if one is a substring of the other. In our example, neither S1 nor S2 is a substring of the other. Alternatively, we could say that two strands are similar if the number of changes needed to turn one into the other is small. Yet another way to measure the similarity of strands S1 and S2 is by finding a third strand S3 in which the bases in S3 appear in each of S1 and S2; these bases must appear in the same order, but not necessarily consecutively. The longer the strand S3 we can find, the more similar S1 and S2 are. In our example, the longest strand S3 is GTCGTCGGAAGCCGGCCGAA.

We formalize this last notion of similarity as the longest-common-subsequence problem. A subsequence of a given sequence is just the given sequence with zero or more elements left out. Formally, given a sequence X = ⟨x1, x2, ..., xm⟩, another sequence Z = ⟨z1, z2, ..., zk⟩ is a subsequence of X if there exists a strictly increasing sequence ⟨i1, i2, ..., ik⟩ of indices of X such that for all j = 1, 2, ..., k, we have x_{i_j} = z_j. For example, Z = ⟨B, C, D, B⟩ is a subsequence of X = ⟨A, B, C, B, D, A, B⟩ with corresponding index sequence ⟨2, 3, 5, 7⟩.
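The longest-common-subsequence problem has the standard dynamic-programming solution sketched below; the function name is illustrative, and the printed result is one LCS of the example strands (ties in the table may yield a different string of the same length, 20).

```python
# Standard LCS dynamic program with traceback.
def lcs(X, Y):
    m, n = len(X), len(Y)
    # c[i][j] = length of an LCS of X[:i] and Y[:j]
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    # walk back through the table to recover one LCS
    out, i, j = [], m, n
    while i and j:
        if X[i - 1] == Y[j - 1]:
            out.append(X[i - 1]); i -= 1; j -= 1
        elif c[i - 1][j] >= c[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

S1 = "ACCGGTCGAGTGCGCGGAAGCCGGCCGAA"
S2 = "GTCGTTCGGAATGCCGTTGCTCTGTAAA"
print(lcs(S1, S2))   # a longest common subsequence of S1 and S2, length 20
```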

Elements Of Dynamic Programming

The second ingredient that an optimization problem must have for dynamic programming to be applicable is that the space of subproblems must be "small" in the sense that a recursive algorithm for the problem solves the same subproblems over and over, rather than always generating new subproblems. Typically, the total number of distinct subproblems is a polynomial in the input size. When a recursive algorithm revisits the same problem over and over again, we say that the optimization problem has overlapping subproblems. In contrast, a problem for which a divide-and-conquer approach is suitable usually generates brand-new problems at each step of the recursion. Dynamic-programming algorithms typically take advantage of overlapping subproblems by solving each subproblem once and then storing the solution in a table where it can be looked up when needed, using constant time per lookup.

To illustrate the overlapping-subproblems property in greater detail, let us reexamine the matrix-chain multiplication problem. Referring back to Figure 15.3, observe that MATRIX-CHAIN-ORDER repeatedly looks up the solution to subproblems in lower rows when solving subproblems in higher rows. For example, entry m[3, 4] is referenced 4 times: during the computations of m[2, 4], m[1, 4], m[3, 5], and m[3, 6]. If m[3, 4] were recomputed each time, rather than just being looked up, the increase in running time would be dramatic. To see this, consider the following (inefficient) recursive procedure that determines m[i, j], the minimum number of scalar multiplications needed to compute the matrix-chain product Ai..j = Ai Ai+1 ··· Aj. The procedure is based directly on the recurrence (15.12). Figure 15.5 shows the recursion tree produced by the call RECURSIVE-MATRIX-CHAIN(p, 1, 4). Each node is labeled by the values of the parameters i and j. Observe that some pairs of values occur many times.
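The sketch below contrasts a direct Python rendering of this inefficient recursive procedure with a memoized variant that looks each m[i, j] up after computing it once; the function names and the dimension sequence p used in the demonstration are illustrative.

```python
# p is the dimension sequence: matrix A_i is p[i-1] x p[i].
def recursive_matrix_chain(p, i, j):
    if i == j:
        return 0
    best = float("inf")
    for k in range(i, j):                       # try every split point
        q = (recursive_matrix_chain(p, i, k)
             + recursive_matrix_chain(p, k + 1, j)
             + p[i - 1] * p[k] * p[j])
        best = min(best, q)
    return best                                 # exponential: recomputes m[3, 4] etc.

def memoized_matrix_chain(p, i, j, memo=None):
    memo = {} if memo is None else memo
    if i == j:
        return 0
    if (i, j) not in memo:                      # solve each subproblem only once
        memo[(i, j)] = min(
            memoized_matrix_chain(p, i, k, memo)
            + memoized_matrix_chain(p, k + 1, j, memo)
            + p[i - 1] * p[k] * p[j]
            for k in range(i, j))
    return memo[(i, j)]

p = [30, 35, 15, 5, 10, 20, 25]                 # a common six-matrix example
print(recursive_matrix_chain(p, 1, 6), memoized_matrix_chain(p, 1, 6))   # 15125 15125
```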