Solutions to the book "Introduction to Algorithms, 3rd Edition" — Jian Li, Computer Science, Nanjing University, China, 2011
I Foundations
Copyright by Yinyanghu
Chapter 1

Problem 1-1 Comparison of running times

For each function f(n) and each time t, the table gives the largest problem size n such that f(n) ≤ t, assuming one operation per microsecond (1 month = 30 days, 1 year = 365 days, 1 century = 100 years). Large entries are rounded.

              lg n               √n                n               n lg n         n^2          n^3       2^n   n!
1 second      2^(10^6)           10^12             10^6            62746          1000         100       19    9
1 minute      2^(6·10^7)         3.6·10^15         6·10^7          2801417        7745         391       25    11
1 hour        2^(3.6·10^9)       1.296·10^19       3.6·10^9        ≈1.33·10^8     60000        1532      31    12
1 day         2^(8.64·10^10)     7.46496·10^21     8.64·10^10      ≈2.75·10^9     293938       4420      36    13
1 month       2^(2.592·10^12)    6.718464·10^24    2.592·10^12     ≈7.19·10^10    1609968      13736     41    15
1 year        2^(3.1536·10^13)   ≈9.95·10^26       3.1536·10^13    ≈8.0·10^11     ≈5615692     ≈31594    44    16
1 century     2^(3.1536·10^15)   ≈9.95·10^30       3.1536·10^15    ≈6.9·10^13     ≈5.6·10^7    ≈146645   51    17
Chapter 2

Problem 2-1 Insertion sort on small arrays in merge sort
a. Insertion sort sorts one sublist of length k in Θ(k^2) worst-case time, so sorting all n/k sublists takes Θ(k^2 · n/k) = Θ(nk).
b. Merging the sublists one at a time into a growing output would cost Θ(n · (n/k)) = Θ(n^2/k); merging them pairwise as in ordinary merge sort costs Θ(n) per level over log(n/k) levels, i.e. Θ(n log(n/k)).
c. The total is Θ(nk + n log(n/k)) = Θ(nk + n log n − n log k); with k = Θ(log n) this is Θ(2n log n − n log log n) = Θ(n log n), the same asymptotic running time as standard merge sort.
d. In practice, k should be the largest sublist length on which insertion sort is faster than merge sort, which is best determined empirically on the target machine. A runnable sketch of the hybrid follows.
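The following is a minimal, runnable sketch (not code from the original solution) of the hybrid scheme discussed above: insertion sort on sublists of length at most k, the usual merge for the rest. The cutoff value k=16 is an illustrative choice, not one fixed by the problem.

# Hybrid merge sort: insertion sort on subarrays of length <= k, then merge.
def insertion_sort(a, lo, hi):            # sorts a[lo:hi] in place
    for i in range(lo + 1, hi):
        key, j = a[i], i - 1
        while j >= lo and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key

def merge(left, right):                    # standard two-way merge
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

def hybrid_merge_sort(a, k=16):
    if len(a) <= k:                        # small subarray: Theta(k^2) insertion sort
        b = list(a)
        insertion_sort(b, 0, len(b))
        return b
    mid = len(a) // 2
    return merge(hybrid_merge_sort(a[:mid], k), hybrid_merge_sort(a[mid:], k))

print(hybrid_merge_sort([5, 2, 9, 1, 7, 3, 8, 6, 4, 0]))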
Problem 2-2 Correctness of bubblesort
a. To show that Bubblesort actually sorts, we must additionally prove that A' is a permutation of the elements of A.
b. Loop invariant for the inner for loop: at the start of each iteration, A[j] = min{A[k] : j ≤ k ≤ n}, and the subarray A[j..n] is a permutation of the values originally in A[j..n].
Initialization: when j = n, the subarray A[j..n] consists of the single element A[n], so the invariant holds trivially.
Maintenance: if A[j−1] > A[j], the two elements are exchanged, so afterwards A[j−1] is the minimum of A[j−1..n]; since only an exchange occurred, A[j−1..n] is still a permutation of its original contents, and decrementing j preserves the invariant.
Termination: the loop ends with j = i, so A[i] = min{A[k] : i ≤ k ≤ n} and A[i..n] is a permutation of the original A[i..n].
c. Loop invariant for the outer for loop: at the start of each iteration, A[1..i−1] consists of the i−1 smallest values of A[1..n] in sorted order (A[1] ≤ A[2] ≤ ... ≤ A[i−1]), and A[i..n] contains the remaining n−i+1 values.
Initialization: i = 1, so A[1..0] is empty and the invariant holds trivially.
Maintenance: by part (b), after lines 2-4 the element A[i] is the smallest value in A[i..n]; hence A[1..i] holds the i smallest values in sorted order and A[i+1..n] holds the rest, so the invariant holds after i is incremented.
Termination: the loop ends with i = n, so A[1..n−1] holds the n−1 smallest values in sorted order and A[n] holds the largest value; hence A is sorted.
d. The worst-case running time of bubblesort is Θ(n^2), the same order as insertion sort's worst case, but bubblesort always performs Θ(n^2) comparisons, so its constant factors make it slower than insertion sort in practice.
Problem 2-3 Correctness of Horner's rule
a. Horner's rule runs in Θ(n) time.
b. Naive polynomial evaluation takes Θ(n^2) time:

Naive-Polynomial-Evaluation(P(x), x)
1   y = 0
2   for i = 0 to n
3       t = 1
4       for j = 1 to i
5           t = t · x
6       y = y + t · a_i
7   return y

c. Loop invariant: at the start of each iteration of the for loop of Horner's rule, y = Σ_{k=0}^{n−(i+1)} a_{k+i+1} · x^k. At termination this gives y = Σ_{k=0}^{n} a_k x^k, so the code correctly evaluates the polynomial. (Detailed proof omitted.)
d. (Omitted.)
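A minimal runnable comparison of Horner's rule (Θ(n)) with the naive evaluation above (Θ(n^2)); the coefficient list a is an illustrative example input.

def horner(a, x):
    # Evaluate a[0] + a[1]*x + ... + a[n]*x^n in Theta(n) time.
    y = 0
    for coeff in reversed(a):
        y = coeff + x * y
    return y

def naive_eval(a, x):
    # Theta(n^2): recomputes x^i with an inner loop, as in the pseudocode above.
    y = 0
    for i, coeff in enumerate(a):
        t = 1
        for _ in range(i):
            t *= x
        y += coeff * t
    return y

a = [3, -1, 4, 1, 5]          # 3 - x + 4x^2 + x^3 + 5x^4
assert horner(a, 2) == naive_eval(a, 2) == 105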
Problem 2-4 Inversions
a. The five inversions of the array ⟨2, 3, 8, 6, 1⟩ are (1, 5), (2, 5), (3, 5), (4, 5), and (3, 4).
b. The array ⟨n, n−1, n−2, ..., 2, 1⟩ has the most inversions, namely C(n, 2) = n(n−1)/2.
c. The running time of insertion sort is Θ(n + d), where d is the number of inversions: each execution of the inner while-loop body removes exactly one inversion.
d. The number of inversions can be determined in Θ(n log n) time with the following modification of merge sort.
Count-Inversions(A, left, right)
1   inversions = 0
2   if left < right
3       mid = ⌊(left + right)/2⌋
4       inversions = inversions + Count-Inversions(A, left, mid)
5       inversions = inversions + Count-Inversions(A, mid + 1, right)
6       inversions = inversions + Merge-Inversions(A, left, mid, right)
7   return inversions

Merge-Inversions(A, left, mid, right)
1   n1 = mid − left + 1
2   n2 = right − mid
3   let L[1..n1 + 1] and R[1..n2 + 1] be new arrays
4   for i = 1 to n1
5       L[i] = A[left + i − 1]
6   for i = 1 to n2
7       R[i] = A[mid + i]
8   L[n1 + 1] = R[n2 + 1] = ∞
9   i = j = 1
10  inversions = 0
11  counted = FALSE
12  for k = left to right
13      if counted == FALSE and L[i] > R[j]
14          inversions = inversions + n1 − i + 1
15          counted = TRUE
16      if L[i] ≤ R[j]
17          A[k] = L[i]
18          i = i + 1
19      else A[k] = R[j]
20          j = j + 1
21          counted = FALSE
22  return inversions

The answer is Count-Inversions(A, 1, n), which, like merge sort, runs in Θ(n log n) time.
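A compact runnable re-implementation of the same merge-sort-based inversion count (not the pseudocode above verbatim); it returns a sorted copy together with the inversion count.

def count_inversions(a):
    # Returns (sorted copy of a, number of inversions) in Theta(n log n) time.
    if len(a) <= 1:
        return list(a), 0
    mid = len(a) // 2
    left, inv_l = count_inversions(a[:mid])
    right, inv_r = count_inversions(a[mid:])
    merged, i, j, inv = [], 0, 0, inv_l + inv_r
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:                        # left[i..] are all > right[j]: that many inversions
            inv += len(left) - i
            merged.append(right[j]); j += 1
    merged += left[i:] + right[j:]
    return merged, inv

assert count_inversions([2, 3, 8, 6, 1])[1] == 5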
Chapter 3 Problem 3-1 Asymptotic behavior of polynomials (Omit!)
Problem 3-2 Relative asymptotic growths

A            B            O     o     Ω     ω     Θ
log^k n      n^ϵ          yes   yes   no    no    no
n^k          c^n          yes   yes   no    no    no
√n           n^(sin n)    no    no    no    no    no
2^n          2^(n/2)      no    no    yes   yes   no
n^(log c)    c^(log n)    yes   no    yes   no    yes
log(n!)      log(n^n)     yes   no    yes   no    yes
Problem 3-3 Ordering by asymptotic growth rates
a. In decreasing order of asymptotic growth (functions listed together belong to the same Θ-class):

2^(2^(n+1))
2^(2^n)
(n + 1)!
n!
e^n
n · 2^n
2^n
(3/2)^n
n^(log log n) = (log n)^(log n)
(log n)!
n^3
n^2 = 4^(log n)
n log n and log(n!)
n = 2^(log n)
(√2)^(log n) = √n
2^(√(2 log n))
log^2 n
ln n
√(log n)
ln ln n
2^(log* n)
log* n and log*(log n)
log(log* n)
n^(1/log n) = 2 and 1

b. f(n) = 2^(2^((n+1)·cos n)). Because cos n oscillates, f(n) swings between extremely small and extremely large values infinitely often, so it is neither O(g_i(n)) nor Ω(g_i(n)) for the growing functions g_i(n) of part (a).
Problem 3-4 Asymptotic notation properties
a. Wrong. For example, n = O(n^2) but n^2 ≠ O(n).
b. Wrong. For example, with f(n) = n^2 and g(n) = n, Θ(min(f(n), g(n))) = Θ(n), but f(n) + g(n) = n^2 + n ≠ Θ(n).
c. Right. f(n) = O(g(n)) means f(n) ≤ c·g(n) for n ≥ n0; assuming log(g(n)) ≥ 1 and f(n) ≥ 1 when n → ∞, we get log(f(n)) ≤ log(c·g(n)) = log c + log(g(n)) ≤ (log c + 1)·log(g(n)), so log(f(n)) = O(log(g(n))).
d. Wrong. For example, with f(n) = 2n and g(n) = n, f(n) = 2n = O(g(n)), but 2^(f(n)) = 2^(2n) = 4^n ≠ O(2^n).
e. Wrong. It fails when f(n) can be smaller than 1; for example f(n) = 1/n, where f(n) ≠ O((f(n))^2) = O(1/n^2).
f. Right. f(n) ≤ c·g(n) implies (1/c)·f(n) ≤ g(n), so g(n) = Ω(f(n)).
g. Wrong. For example, f(n) = 2^n, where f(n/2) = 2^(n/2) = √(2^n) and f(n) ≠ Θ(f(n/2)).
h. Right. f(n) + o(f(n)) = Θ(max(f(n), o(f(n)))) = Θ(f(n)).
Problem 3-5 Variations on O and Ω
a. Recall that f(n) = Ω~(g(n)) means 0 ≤ c·g(n) ≤ f(n) for infinitely many integers n. For any two functions f(n) and g(n), either f(n) = O(g(n)) or f(n) = Ω~(g(n)) (or both): if f(n) ≠ O(g(n)), then for every constant c > 0 there are infinitely many n with f(n) > c·g(n) ≥ 0, which is exactly f(n) = Ω~(g(n)). This fails if Ω~ is replaced by the ordinary Ω; for example, a function that oscillates between 0 and n^2 is neither O(n) nor Ω(n).
b. Advantage: Ω~ can express a lower bound that the running time attains on infinitely many inputs, even when no bound holds for all sufficiently large n. Disadvantage: such a bound says nothing about the remaining inputs, so it is a much weaker guarantee on the running time.
c. For any two functions f(n) and g(n): if f(n) = Θ(g(n)), then f(n) = O'(g(n)) and f(n) = Ω(g(n)); but the converse direction of Theorem 3.1 is not claimed to hold with O' in place of O.
d. Õ(g(n)) = { f(n) : there exist positive constants c, k, and n0 such that 0 ≤ f(n) ≤ c·g(n)·log^k(n) for all n ≥ n0 }. Analogously,
Ω~(g(n)) = { f(n) : there exist positive constants c, k, and n0 such that 0 ≤ c·g(n)·log^k(n) ≤ f(n) for all n ≥ n0 },
Θ~(g(n)) = { f(n) : there exist positive constants c1, c2, k1, k2, and n0 such that 0 ≤ c1·g(n)·log^{k1}(n) ≤ f(n) ≤ c2·g(n)·log^{k2}(n) for all n ≥ n0 }.
With these definitions, for any two functions f(n) and g(n), f(n) = Θ~(g(n)) if and only if f(n) = Õ(g(n)) and f(n) = Ω~(g(n)).
Problem 3-6 Iterated functions

f(n)         c     f_c*(n)
n − 1        0     n
log n        1     log* n
n/2          1     ⌈log n⌉
n/2          2     ⌈log n⌉ − 1
√n           2     log log n
√n           1     ∞ (does not converge for n ≥ 2)
n^(1/3)      2     log_3 log n
n / log n    2     O(log n)
Chapter 4

Problem 4-1 Recurrence examples
a. T(n) = Θ(n^4)   b. T(n) = Θ(n)   c. T(n) = Θ(n^2 log n)   d. T(n) = Θ(n^2)   e. T(n) = Θ(n^{log_2 7})   f. T(n) = Θ(√n log n)   g. T(n) = Θ(n^3)
Problem 4-2 Parameter-passing costs
a. Binary search:
1. T(n) = T(n/2) + Θ(1) ⇒ T(n) = Θ(log n)
2. T(n) = T(n/2) + Θ(N) ⇒ T(n) = Θ(N log n)
3. T(n) = T(n/2) + Θ(n) ⇒ T(n) = Θ(n)
b. Merge sort:
1. T(n) = 2T(n/2) + Θ(n) ⇒ T(n) = Θ(n log n)
2. T(n) = 2T(n/2) + Θ(N) ⇒ T(n) = Θ(nN)
3. T(n) = 2T(n/2) + Θ(n) ⇒ T(n) = Θ(n log n)
Problem 4-3 More recurrence examples
a. T(n) = Θ(n^{log_3 4})   b. T(n) = Θ(n log log n)   c. T(n) = Θ(n^2 √n)   d. T(n) = Θ(n log n)   e. T(n) = Θ(n log log n)   f. T(n) = Θ(n)   g. T(n) = Θ(log n)   h. T(n) = Θ(n log n)   i. T(n) = Θ(n / log n)   j. T(n) = Θ(n log log n)
Problem 4-4 Fibonacci numbers
a. (Omitted.)   b. (Omitted.)
c. F(z) = Σ_{i=0}^{∞} (1/√5)(φ^i − φ̂^i) z^i.
d. Since |φ̂| < 1, the term φ̂^i/√5 tends to 0 as i → ∞ and has absolute value less than 1/2; hence F_i equals φ^i/√5 rounded to the nearest integer, for i > 0.
Problem 4-5 Chip testing
a. If more than n/2 chips are bad, the bad chips can conspire to answer exactly as the good chips would, making the two groups indistinguishable; so no strategy based on pairwise tests can be guaranteed to identify a good chip.
b. Pair up the chips and test each pair. If both chips report the other as good, keep one of the two; otherwise discard both. Since the good chips are a strict majority, the kept chips still contain a strict majority of good chips, and there are at most ⌊n/2⌋ of them.
c. The recurrence T(n) = T(n/2) + ⌈n/2⌉ gives T(n) = Θ(n), so one good chip can be identified with Θ(n) pairwise tests; once a good chip is known, every other chip can be tested against it with n − 1 additional tests, still Θ(n) in total.

Problem 4-6 Monge arrays
a. (Omitted.)
b. Change A[2, 3] to 5.
c. Suppose f(i) > f(i+1) for some i, and let j = f(i+1) < k = f(i). Since k is the leftmost minimum of row i, A[i, j] > A[i, k]; since j is the leftmost minimum of row i+1, A[i+1, k] ≥ A[i+1, j]. Adding these gives A[i, j] + A[i+1, k] > A[i, k] + A[i+1, j], contradicting the Monge property. Hence f(1) ≤ f(2) ≤ ... ≤ f(m).
d. By part (c), the leftmost minimum of each odd row lies between the leftmost minima of its neighboring even rows, so all odd rows can be processed in O(m + n) total time.
e. T(m) = T(m/2) + O(m + n) ⇒ T(m) = O(m + n log m).
Chapter 5

Problem 5-1 Probabilistic counting
a. For 1 ≤ i ≤ n, let X_i be the increase in the value represented by the counter caused by the i-th Increment operation, and let V_n = Σ_{i=1}^{n} X_i be the value represented after n Increments. When the counter holds i, an Increment leaves the represented value unchanged with probability 1 − 1/(n_{i+1} − n_i) and increases it by n_{i+1} − n_i with probability 1/(n_{i+1} − n_i), so
E[X_i] = 0 · (1 − 1/(n_{i+1} − n_i)) + (n_{i+1} − n_i) · 1/(n_{i+1} − n_i) = 1.
Hence E[V_n] = Σ_{i=1}^{n} E[X_i] = n.
b. With n_i = 100i, each X_i is 0 with probability 99/100 and 100 with probability 1/100, and the X_i are independent, so Var[V_n] = Var[X_1] + Var[X_2] + ... + Var[X_n], where
Var[X_i] = E[X_i^2] − E^2[X_i] = (0^2 · 99/100 + 100^2 · 1/100) − 1 = 99.
Therefore Var[V_n] = 99n.
Problem 5-2 Searching an unsorted array
a.
Random-Search(A, x)
1   B = ∅
2   while B ≠ {1, 2, ..., n}
3       i = Random(1, n)
4       if A[i] == x
5           return i
6       B = B ∪ {i}
7   return NIL

b. If exactly one index satisfies A[i] = x, the expected number of indices picked before Random-Search terminates is n (a geometric distribution with success probability 1/n).
c. If k ≥ 1 indices satisfy A[i] = x, the expected number of picks is n/k.
d. If no index satisfies A[i] = x, the expected number of picks until every index has been chosen at least once is Θ(n log n) (the coupon-collector bound), after which Random-Search reports that x is not in A.
e. Deterministic-Search: average case (n+1)/2; worst case n.
f. With k ≥ 1 matching indices: average case (n+1)/(k+1); worst case n − k + 1.
g. With no matching index: average case n; worst case n.
h. Scramble-Search: for k = 0, worst case n and expected n; for k = 1, worst case n and expected (n+1)/2; for k ≥ 1, worst case n − k + 1 and expected (n+1)/(k+1).
i. Scramble-Search has the same expected cost as Deterministic-Search but additionally pays Θ(n) to permute the input, and Random-Search may probe the same index repeatedly; so Deterministic-Search is the best choice of the three.
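A runnable sketch of Random-Search as reconstructed above; it tracks the set of indices already probed and also reports how many probes were made, which can be compared with the expectations of parts (b)-(d).

import random

def random_search(A, x):
    # Repeatedly probe random indices until x is found or every index was tried.
    n = len(A)
    tried = set()
    picks = 0
    while len(tried) < n:
        i = random.randrange(n)
        picks += 1
        if A[i] == x:
            return i, picks
        tried.add(i)
    return None, picks

idx, picks = random_search([7, 3, 9, 3, 1], 9)
print(idx, picks)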
II Sorting and Order Statistics
Chapter 6

Problem 6-1 Building a heap using insertion
a. No. Counterexample: on the input ⟨1, 2, 3⟩, Build-Max-Heap produces ⟨3, 2, 1⟩ while Build-Max-Heap' (repeated insertion) produces ⟨3, 1, 2⟩.
b. Each insertion costs O(log n), so Build-Max-Heap' runs in O(n log n) time. In the worst case (e.g., increasing input),
T(n) = Σ_{i=1}^{n} Θ(⌊log i⌋) ≥ Σ_{i=⌈n/2⌉}^{n} Θ(⌊log⌈n/2⌉⌋) = Θ((n/2)·⌊log n − 1⌋) = Ω(n log n),
so the worst-case running time is Θ(n log n).
Problem 6-2 Analysis of d-ary heaps
a. For a node with index i, the parent and the j-th child (1 ≤ j ≤ d) are

D-ary-Parent(i)
1   return ⌊(i − 2)/d + 1⌋

D-ary-Child(i, j)
1   return d(i − 1) + j + 1

b. The height of a d-ary heap of n elements is Θ(log_d n).
c. Heap-Extract-Max works as in the binary case, but Max-Heapify must compare a node with all d of its children, so the running time is O(d log_d n).
d. Max-Heap-Insert runs in O(log_d n) time.
e. Heap-Increase-Key runs in O(log_d n) time.
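A small runnable sketch of the index arithmetic and of Max-Heapify for a d-ary heap stored in a 0-based Python list (the formulas above are stated for 1-based indexing, so the arithmetic below is shifted accordingly).

def parent(i, d):            # 0-based index of the parent of node i
    return (i - 1) // d

def child(i, j, d):          # 0-based index of the j-th child of node i, 1 <= j <= d
    return d * i + j

def max_heapify(a, i, d):
    # Push a[i] down until the d-ary max-heap property holds; O(d log_d n).
    n = len(a)
    while True:
        largest = i
        for j in range(1, d + 1):
            c = child(i, j, d)
            if c < n and a[c] > a[largest]:
                largest = c
        if largest == i:
            return
        a[i], a[largest] = a[largest], a[i]
        i = largest

heap = [3, 9, 7, 5, 1, 8, 2]
max_heapify(heap, 0, 3)      # ternary heap example
print(heap)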
Problem 6-3 Young tableaus
a. One valid 4 × 4 Young tableau containing the elements {9, 16, 3, 2, 4, 8, 5, 14, 12}:

2   8   12   16
3   9   14   ∞
4   ∞   ∞    ∞
5   ∞   ∞    ∞

b. (Omitted.)
c. Extract-Min: remove Y[1, 1], put ∞ in its place, and restore the tableau property with a procedure analogous to Max-Heapify that repeatedly exchanges the ∞ with the smaller of its right and down neighbors; each step moves one position right or down, so the running time is O(m + n).
d. Insert: place the new key at Y[m, n] (which must hold ∞ in a nonfull tableau) and bubble it up/left by exchanging it with the larger of its up and left neighbors; again O(m + n) time.
e. Using an n × n Young tableau, sort n^2 numbers with n^2 insertions followed by n^2 extract-mins; each operation costs O(n), so the total time is O(n^3).
f. Searching an m × n Young tableau resembles searching a binary search tree: start at the top-right (or bottom-left) corner, and at each comparison either find the key, move left (discarding a column), or move down (discarding a row). At most m + n steps are needed, so the running time is O(m + n).
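A runnable sketch of the O(m + n) search of part (f), starting from the top-right corner; INF marks empty cells as in the example tableau of part (a).

INF = float('inf')

def young_search(Y, key):
    # Start at the top-right corner; each step discards one row or one column.
    if not Y:
        return None
    i, j = 0, len(Y[0]) - 1
    while i < len(Y) and j >= 0:
        if Y[i][j] == key:
            return (i, j)
        if Y[i][j] > key:
            j -= 1            # everything below in this column is even larger
        else:
            i += 1            # everything to the left in this row is even smaller
    return None

Y = [[2, 8, 12, 16],
     [3, 9, 14, INF],
     [4, INF, INF, INF],
     [5, INF, INF, INF]]
print(young_search(Y, 14), young_search(Y, 10))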
Chapter 7

Problem 7-1 Hoare partition correctness
a. (Omitted.)
b. The indices i and j never cause Hoare-Partition to access an element of A outside A[p..r]: whenever A is indexed, p ≤ i, j ≤ r.
c. When Hoare-Partition terminates we have i ≥ j, and the returned index j satisfies p ≤ j < r.
d. When Hoare-Partition terminates, every element of A[p..j] is at most x and every element of A[j+1..r] is at least x.
e. Rewriting QUICKSORT to use HOARE-PARTITION:

Quicksort'(A, p, r)
1   if p < r
2       q = Hoare-Partition(A, p, r)
3       Quicksort'(A, p, q)
4       Quicksort'(A, q + 1, r)
Problem 7-2 Quicksort with equal element values
a. When all element values are equal, every call to Partition produces a maximally unbalanced split (one side empty), so Randomized-Quicksort's running time is Θ(n^2).
b. Two ways to implement Partition'(A, p, r), which returns indices q ≤ t such that every element of A[q..t] equals the pivot.
Method 1 (Hoare-style scans that also report the equal region):

Partition'(A, p, r)
1   x = A[p]
2   i = p − 1
3   j = r + 1
4   while TRUE
5       repeat
6           j = j − 1
7       until A[j] < x
8       repeat
9           i = i + 1
10      until A[i] > x
11      if i < j
12          exchange A[i] with A[j]
13      else return j, i

Method 2 (run the ordinary Partition, then grow the block of pivot-equal elements around the returned index):

Partition'(A, p, r)
1   i = j = Partition(A, p, r)
2   while i > p and A[i − 1] == A[i]
3       i = i − 1
4   while j < r and A[j + 1] == A[j]
5       j = j + 1
6   return i, j

The extra scans take Θ(r − p) time.
c. (Omitted.)
d. Quicksort' never recurses on the elements equal to the pivot, so equal keys are not re-sorted and the expected running time remains O(n log n).
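A runnable sketch of quicksort with a three-way ("Dutch national flag" style) partition, one concrete way to realize the Partition' specification above: it returns the range of elements equal to the pivot so that equal keys are never re-sorted. This is an illustrative implementation, not a transcription of either method in the original.

import random

def partition3(a, lo, hi):
    # Rearranges a[lo..hi] and returns (q, t): a[lo..q-1] < pivot,
    # a[q..t] == pivot, a[t+1..hi] > pivot.
    pivot = a[random.randint(lo, hi)]
    lt, i, gt = lo, lo, hi
    while i <= gt:
        if a[i] < pivot:
            a[lt], a[i] = a[i], a[lt]; lt += 1; i += 1
        elif a[i] > pivot:
            a[i], a[gt] = a[gt], a[i]; gt -= 1
        else:
            i += 1
    return lt, gt

def quicksort3(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    if lo < hi:
        q, t = partition3(a, lo, hi)
        quicksort3(a, lo, q - 1)
        quicksort3(a, t + 1, hi)

data = [4, 1, 4, 4, 2, 9, 4, 7, 1]
quicksort3(data)
print(data)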
Problem 7-3 Alternative quicksort analysis
a. E[X_i] = Pr{the i-th smallest element is chosen as the pivot} = 1/n.
b. E[T(n)] = E[ Σ_{q=1}^{n} X_q (T(q − 1) + T(n − q) + Θ(n)) ].
c. E[T(n)] = (2/n) Σ_{q=2}^{n−1} E[T(q)] + Θ(n).
d. Splitting the sum at ⌈n/2⌉ and bounding log k by log n − 1 on the lower half and by log n on the upper half,
Σ_{k=2}^{n−1} k log k ≤ Σ_{k=2}^{⌈n/2⌉−1} k log k + Σ_{k=⌈n/2⌉}^{n−1} k log k ≤ (1/2) n^2 log n − (1/8) n^2.
e. Substituting the bound of part (d) into the recurrence of part (c) and using the substitution method gives E[T(n)] = Θ(n log n).
Problem 7-4 Stack depth for quicksort
a. (Omitted.)
b. Worst case: when every call to Partition returns q = r (e.g., on an already sorted array), the recursive calls are on subarrays of sizes n − 1, n − 2, ..., so the stack depth is Θ(n).
c. Recurse on the smaller subarray and iterate on the larger one; then the worst-case stack depth is Θ(log n), and the expected running time remains O(n log n):

Tail-Recursive-Quicksort'(A, p, r)
1   while p < r
2       // Partition and sort the smaller subarray first
3       q = Partition(A, p, r)
4       if q − p < r − q
5           Tail-Recursive-Quicksort'(A, p, q − 1)
6           p = q + 1
7       else
8           Tail-Recursive-Quicksort'(A, q + 1, r)
9           r = q − 1
Problem 7-5 Median-of-3 partition
a. P_i = 6(i − 1)(n − i) / (n(n − 1)(n − 2)).
b. lim_{n→∞} P_{⌊(n+1)/2⌋} / (1/n) = 3/2, so the median-of-3 method is 1.5 times more likely to choose the median than the ordinary implementation.
c. Σ_{i=n/3}^{2n/3} P_i ≈ ∫_{n/3}^{2n/3} P_i di = 13/27, compared with 1/3 for the ordinary implementation.
d. Choosing the median of three adds only Θ(1) work to each Θ(n) call to Partition, and quicksort is still subject to the Ω(n log n) comparison lower bound, so the median-of-3 method affects only the constant factor, not the Θ(n log n) asymptotic running time.
Problem 7-6 Fuzzy sorting of intervals
a. A randomized algorithm for fuzzy-sorting n intervals [a_i, b_i], stored as arrays A (left endpoints) and B (right endpoints):

Find-Intersection(A, B, p, s)
1   i = Random(p, s)
2   exchange A[i] with A[s]
3   exchange B[i] with B[s]
4   a = A[s]
5   b = B[s]
6   for i = p to s − 1
7       if A[i] ≤ b and B[i] ≥ a
8           if A[i] > a
9               a = A[i]
10          elseif B[i] < b
11              b = B[i]
12  return (a, b)

Partition-Right(A, B, a, p, s)
1   i = p − 1
2   for j = p to s − 1
3       if A[j] ≤ a
4           i = i + 1
5           exchange A[i] with A[j]
6           exchange B[i] with B[j]
7   exchange A[i + 1] with A[s]
8   exchange B[i + 1] with B[s]
9   return i + 1

Partition-Left-Middle(A, B, b, p, r)
1   i = p − 1
2   for j = p to r − 1
3       if B[j] < b
4           i = i + 1
5           exchange A[i] with A[j]
6           exchange B[i] with B[j]
7   exchange A[i + 1] with A[r]
8   exchange B[i + 1] with B[r]
9   return i + 1

Fuzzy-Sort(A, B, p, s)
1   if p < s
2       (a, b) = Find-Intersection(A, B, p, s)
3       r = Partition-Right(A, B, a, p, s)
4       q = Partition-Left-Middle(A, B, b, p, r)
5       Fuzzy-Sort(A, B, p, q − 1)
6       Fuzzy-Sort(A, B, r + 1, s)

b. The algorithm runs in expected time Θ(n log n) in general, but in expected time Θ(n) when all of the intervals overlap: in that case the point interval [a, b] found by Find-Intersection lies in every input interval, both partition calls place every interval in the middle region, the two recursive calls get empty subarrays, and Fuzzy-Sort effectively calls Find-Intersection only once before returning.
Chapter 8

Problem 8-1 Probabilistic lower bounds on comparison sorting
a. Each of the n! possible permutations occurs as the input with probability 1/n!, so each of the n! reachable leaves is reached with probability 1/n!; every unreachable leaf has probability 0.
b. Let d_T(x) denote the depth of leaf x in tree T. Splitting the leaves of T between the left subtree LT and the right subtree RT,
D(T) = Σ_{x ∈ leaves(T)} d_T(x)
     = Σ_{x ∈ leaves(LT)} (d_LT(x) + 1) + Σ_{x ∈ leaves(RT)} (d_RT(x) + 1)
     = D(LT) + D(RT) + k,
since T has k leaves in total.
c. Consider a decision tree T with k leaves whose left subtree has i leaves and whose right subtree has k − i leaves; minimizing over i gives
d(k) = min_{1 ≤ i ≤ k−1} { d(i) + d(k − i) + k }.
d. Taking the derivative shows that i log i + (k − i) log(k − i) is minimized at i = k/2; with the minimum at i = k/2, the resulting recurrence d(k) ≥ 2 d(k/2) + k gives d(k) = Ω(k log k).
e. Let k = n!. Since T_A has at least n! reachable leaves, D(T_A) ≥ d(k) = Ω(k log k), so D(T_A) = Ω(n! log(n!)). Since the n! permutations are equally likely, the expected number of comparisons to sort n random elements is
Ω(n! log(n!)) / n! = Ω(log(n!)) = Ω(n log n).
Problem 8-2 Sorting in place in linear time
a. Counting sort (O(n) time and stable, but not in place).
b. A quicksort-style partition around the value 1 (O(n) time and in place, but not stable).
c. Insertion sort (stable and in place, but Θ(n^2) time).
d. (a) Yes: the stable O(n)-time sort can serve as the per-digit sort in radix sort. (b) No: the partition-based sort is not stable. (c) No: insertion sort is too slow to give O(bn) overall.
e. A counting sort that uses only the k + 1 counters outside the input array:

Counting-Sort(A, k)
1   let C[0..k] be a new array
2   for i = 0 to k
3       C[i] = 0
4   for i = 1 to A.length
5       C[A[i]] = C[A[i]] + 1
6   // C[i] now contains the number of elements equal to i
7   p = 0
8   for i = 0 to k
9       for j = 1 to C[i]
10          p = p + 1
11          A[p] = i

This version is not stable, works in place (apart from the O(k) storage outside the input array), and runs in O(n + k) time.
f. Extension question posed in the original: does there exist an algorithm that is simultaneously stable, in place, and O(n)-time?
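A runnable version of the counting sort of part (e) above (keys only, with values in 0..k): it rewrites A in place using O(k) extra counters and is not stable.

def counting_sort_in_place(A, k):
    # O(n + k) time, O(k) extra space; overwrites A with its sorted contents.
    C = [0] * (k + 1)
    for x in A:
        C[x] += 1                 # C[i] = number of elements equal to i
    p = 0
    for value in range(k + 1):
        for _ in range(C[value]):
            A[p] = value
            p += 1

A = [3, 0, 2, 3, 1, 0, 2]
counting_sort_in_place(A, 3)
print(A)                          # [0, 0, 1, 2, 2, 3, 3]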
Problem 8-3 Sorting variable-length items
a. Let m_i be the number of integers with i digits, for i = 1, 2, ..., n, so that Σ_{i=1}^{n} i · m_i = n.
* Count each number's digits: O(n).
* Sort the integers by their number of digits using counting sort: O(n).
* Use radix sort to sort each group of integers having the same length: O(Σ_i i · m_i) = O(n).
Therefore the total running time is O(n).
b.
* Use counting sort to sort the strings by their first letter.
* Group the strings that share the same first letter, remove that letter, and recursively sort each group.
* Recurse until every group contains a single string (or the strings are exhausted).
Each character participates in O(1) sorting passes, so the running time is O(n).
Problem 8-4 Water jugs
a. Compare every red jug with every blue jug: Θ(n^2) comparisons.
b. Model any algorithm as a decision tree in which each node compares one red jug with one blue jug (outcomes <, =, >). There are n! possible matchings of the n red jugs with the n blue jugs, and each must correspond to a distinct leaf, so the height is at least log_3(n!) = Ω(n log n).
c. A randomized algorithm analogous to quicksort: pick a random blue jug, partition the red jugs around its volume (which also locates the matching red jug), partition the blue jugs around that red jug, and recurse on both sides. The expected number of comparisons is O(n log n); the worst case is Θ(n^2).

Partition-Red(R, p, r, x)
1   find the index m with R[m] == x and exchange R[m] with R[r]
2   i = p − 1
3   for j = p to r − 1
4       if R[j] ≤ x
5           i = i + 1
6           exchange R[i] with R[j]
7   exchange R[i + 1] with R[r]
8   return i + 1

Partition-Blue(B, p, r, x) is identical with B in place of R (it places the blue jug whose volume equals x at the returned index).

Match-Jugs(R, B, p, r)
1   if p < r
2       k = Random(p, r)
3       q = Partition-Red(R, p, r, B[k])
4       Partition-Blue(B, p, r, R[q])
5       Match-Jugs(R, B, p, q − 1)
6       Match-Jugs(R, B, q + 1, r)
Problem 8-5 Average sorting
a. A 1-sorted array is simply an ordinarily sorted array.
b. One 2-sorted permutation of ⟨1, 2, ..., 10⟩ that is not sorted: ⟨2, 1, 4, 3, 6, 5, 8, 7, 10, 9⟩.
c. (Omitted.)
d. Proceed as in Shell sort: split the n-element array into the k interleaved subsequences A[i], A[i+k], A[i+2k], ..., and sort each of them with quicksort (or insertion sort) in O((n/k) log(n/k)) time, so the total running time is k · O((n/k) log(n/k)) = O(n log(n/k)).
e. Using a heap of size k (as in the k-way merge of Exercise 6.5-9), a k-sorted array of length n can be sorted in O(n log k) time; a runnable sketch follows.
f. Sorting each of the k subsequences requires Ω((n/k) log(n/k)) comparisons, so sorting a k-sorted array requires Ω(n log(n/k)) time in total; since k is a constant, Ω(n log(n/k)) = Ω(n log n).
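A runnable sketch of part (e): a k-sorted array is the interleaving of k sorted subsequences, so merging their heads with a heap of size k (Python's heapq here) sorts it in O(n log k) time.

import heapq

def sort_k_sorted(a, k):
    # Merge the k sorted subsequences a[i], a[i+k], a[i+2k], ... with a heap of k heads.
    heap = [(a[i], i) for i in range(min(k, len(a)))]
    heapq.heapify(heap)
    out = []
    while heap:
        value, idx = heapq.heappop(heap)
        out.append(value)
        if idx + k < len(a):
            heapq.heappush(heap, (a[idx + k], idx + k))
    return out

print(sort_k_sorted([2, 1, 4, 3, 6, 5, 8, 7, 10, 9], 2))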
Problem 8-6 Lower bound on merging sorted lists
a. There are C(2n, n) ways to divide 2n values into two sorted lists of n values each.
b. By the decision-tree argument, the number of comparisons needed to merge is at least log C(2n, n) = 2n − o(n).
c. If two elements are consecutive in the sorted output and come from different lists, no third element can establish their relative order, so any correct merging algorithm must compare them directly.
d. Construct the two lists so that consecutive output elements alternate between the lists (e.g., the odd values in one list and the even values in the other); then there are 2n − 1 consecutive pairs drawn from different lists, so by part (c) at least 2n − 1 comparisons are required.
Problem 8-7 The 0-1 sorting lemma and columnsort
a. The algorithm places A[p] into a later output position than A[q] even though A[p] < A[q]. Under the monotonic 0-1 mapping that sends values ≤ A[p] to 0 and larger values to 1, we get B[p] = 0 and B[q] = 1.
b. Because the algorithm is oblivious, it performs exactly the same sequence of compare-exchange operations on B as on A, and each compare-exchange treats B consistently with A (the mapping is monotone). Hence B[p] = 0 still ends up after B[q] = 1, i.e., the algorithm also fails to sort the 0-1 array B. By contraposition, an oblivious compare-exchange algorithm that sorts every 0-1 input sorts every input (the 0-1 sorting lemma).
c. Columnsort is an oblivious compare-exchange algorithm (each step sorts fixed columns or rows, or permutes fixed positions), so by the 0-1 sorting lemma we may restrict attention to inputs of 0s and 1s.
d. After step 1 each column is sorted, so each column consists of a run of 0s above a run of 1s and contributes at most one "dirty" boundary. Tracing these boundaries through steps 2-3 shows that at most s of the rows can be dirty (neither all 0s nor all 1s); all other rows are clean.
e. After step 3 the array consists of a clean area of 0s on top, a clean area of 1s on the bottom, and a dirty area of at most s rows, hence at most s^2 elements, in between.
f. Steps 5-6 shift the array by half a column; since r/2 ≥ s^2, the dirty area fits entirely within half a column, so the column sorts of steps 5-6 clean it, and steps 7-8 shift everything back, leaving the array sorted in column-major order. For example, a 0-1 array after step 5 looks like clean 0-rows, at most one partially filled boundary region, then clean 1-rows.
g. (Sketch, following the original notes.) If step 1 sorts adjacent columns in opposite directions, then after step 3 each column contains at most 2 dirty rows, so the dirty area is still bounded by O(s^2) and the argument of parts (e)-(f) goes through under the same assumption r ≥ 2s^2.
h. The requirement r ≥ 2s^2 is what guarantees that the dirty area (at most s^2 elements) fits within half a column in steps 5-6; with only r ≥ 2s this guarantee fails, so the restriction cannot simply be dropped.
Chapter 9

Problem 9-1 Largest i numbers in sorted order
a. Sort the input and list the largest i numbers: Θ(n log n).
b. Build a max-priority queue and extract the maximum i times: Θ(n + i log n).
c. Select the i-th largest number, partition around it, and sort the i largest: Θ(n + i log i).
Problem 9-2 Weighted median
a. Since all elements have the same weight w_i = 1/n, the (lower) median of x_1, x_2, ..., x_n is exactly the weighted median of the x_i for i = 1, 2, ..., n.
b.
* Sort the n elements into increasing order by x_i value: O(n log n).
* Scan the sorted sequence, accumulating weights, and return the first x_k at which the accumulated weight reaches 1/2: O(n).
Total running time: O(n log n).
c. A worst-case linear-time algorithm modeled on the linear-time selection algorithm: partition around the true median of the current subarray and recurse on one side, folding the weight of the discarded side (and of the pivot) onto the pivot.

Weighted-Median(X, w, p, r)
1   partition X[p..r] around its median, found with the linear-time Select; let q be the pivot's position
2   W_L = Σ_{i=p}^{q−1} w_i
3   W_R = Σ_{i=q+1}^{r} w_i
4   if W_L < 1/2 and W_R ≤ 1/2
5       return X[q]
6   elseif W_L ≥ 1/2
7       w_q = w_q + W_R
8       return Weighted-Median(X, w, p, q)
9   else
10      w_q = w_q + W_L
11      return Weighted-Median(X, w, q, r)

Analysis: the recurrence for the worst-case running time of Weighted-Median is T(n) = T(n/2 + 1) + Θ(n), whose solution is T(n) = Θ(n).
d. Let p be the weighted median and, for any point x, let f(x) = Σ_{i=1}^{n} w_i |x − x_i|; we show that f(p) ≤ f(x) for every x. Suppose x > p. Then
f(x) − f(p) = Σ_{i=1}^{n} w_i (|x − x_i| − |p − x_i|),
where
* if x_i ≤ p < x, the term equals w_i (x − p);
* if p < x_i ≤ x, the term is at least −w_i (x − p);
* if p < x ≤ x_i, the term equals −w_i (x − p).
Hence
f(x) − f(p) ≥ (x − p) ( Σ_{x_i ≤ p} w_i − Σ_{x_i > p} w_i ) ≥ 0,
because the weighted median satisfies Σ_{x_i ≤ p} w_i ≥ 1/2 ≥ Σ_{x_i > p} w_i. The case x < p is symmetric. Thus f(p) ≤ f(x) for all x, i.e., the weighted median is an optimal solution to the 1-dimensional post-office location problem.
e. For the 2-dimensional problem we must find a point p = (x, y) minimizing
f(x, y) = Σ_{i=1}^{n} w_i (|x − x_i| + |y − y_i|).
Since the Manhattan distance d(a, b) = |x_a − x_b| + |y_a − y_b| separates into an x-part and a y-part, f(x, y) = g(x) + h(y) and min_{x,y} f(x, y) = min_x g(x) + min_y h(y). Hence the 2-dimensional post-office location problem reduces to two independent 1-dimensional problems: take x to be the weighted median of the x_i and y the weighted median of the y_i.
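A runnable sketch of part (c), simplified to use a random pivot value instead of the exact median (so the linear bound holds in expectation rather than in the worst case); items are (value, weight) pairs, and the weight of each discarded side is carried along as extra_less/extra_greater.

import random

def weighted_median(items):
    # items: list of (value, weight) pairs; returns the weighted lower median value.
    total = sum(w for _, w in items)
    return _wm(items, total, 0.0, 0.0)

def _wm(items, total, extra_less, extra_greater):
    pivot = random.choice(items)[0]
    less    = [(v, w) for v, w in items if v < pivot]
    equal_w = sum(w for v, w in items if v == pivot)
    greater = [(v, w) for v, w in items if v > pivot]
    w_less = extra_less + sum(w for _, w in less)
    w_greater = extra_greater + sum(w for _, w in greater)
    if w_less < total / 2 and w_greater <= total / 2:
        return pivot                               # pivot is the weighted median
    if w_less >= total / 2:                        # median lies strictly below the pivot
        return _wm(less, total, extra_less, w_greater + equal_w)
    return _wm(greater, total, w_less + equal_w, extra_greater)

print(weighted_median([(1, 0.1), (2, 0.35), (3, 0.05), (4, 0.1), (5, 0.4)]))   # 3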
Problem 9-3 Small order statistics
a. If i ≥ n/2, simply run the selection algorithm, so U_i(n) = T(n). Otherwise:
1. Divide the elements into ⌊n/2⌋ pairs (ignoring one leftover element if n is odd).
2. Compare the two elements of each pair, putting the larger into a "bigger" group and the smaller into a "smaller" group: ⌊n/2⌋ comparisons.
3. Recursively find the i-th smallest element of the smaller group: U_i(⌈n/2⌉) comparisons.
4. The i smallest elements of the smaller group, together with their partners in the bigger group, are the only candidates for the i smallest elements overall; select the i-th smallest among these 2i elements: T(2i) comparisons.
Therefore
U_i(n) = T(n) if i ≥ n/2, and U_i(n) = ⌊n/2⌋ + U_i(⌈n/2⌉) + T(2i) otherwise.
b. The problem size halves while i stays fixed, so the recursion depth before reaching the base case is k = O(log(n/i)); the ⌊n/2⌋ terms sum to at most n, giving U_i(n) = n + O(T(2i) · log(n/i)).
c. If i is a constant less than n/2, then T(2i) = O(1), so U_i(n) = n + O(log n).
d. If i = n/k for a constant k ≥ 2, then log(n/i) = log k, so U_i(n) = n + O(T(2n/k) log k).
Problem 9-4 Alternative analysis of randomized selection
a.
E[X_ijk] = 2/(j − i + 1)   if i ≤ k ≤ j,
           2/(k − i + 1)   if i < j ≤ k,
           2/(j − k + 1)   if k ≤ i < j.
b. Summing over all pairs and splitting according to the position of k,
E[X_k] ≤ 2 ( Σ_{i=1}^{k} Σ_{j=k}^{n} 1/(j − i + 1) + Σ_{j=k+1}^{n} (j − k − 1)/(j − k + 1) + Σ_{i=1}^{k−2} (k − i − 1)/(k − i + 1) ).
c. Each of the three sums in part (b) is at most 2n (the first by bounding the harmonic-style terms, the other two because each summand is less than 1), so E[X_k] ≤ 4n.
d. The work of Randomized-Select outside of comparisons is proportional to the number of comparisons plus O(n), so the expected running time is T(n) = E[X_k] + O(n) = O(n).
III Data Structures
Chapter 10

Problem 10-1 Comparisons among lists (worst-case asymptotic times)

                     unsorted,       sorted,         unsorted,       sorted,
                     singly linked   singly linked   doubly linked   doubly linked
Search(L, k)         O(n)            O(n)            O(n)            O(n)
Insert(L, x)         O(1)            O(n)            O(1)            O(n)
Delete(L, x)         O(n)            O(n)            O(1)            O(1)
Successor(L, x)      O(1)            O(1)            O(1)            O(1)
Predecessor(L, x)    O(n)            O(n)            O(1)            O(1)
Minimum(L, x)        O(n)            O(1)            O(n)            O(1)
Maximum(L, x)        O(n)            O(n)            O(n)            O(1)
Problem 10-2 Mergeable heaps using linked lists
Implementing the mergeable heap as a binomial-heap-like structure built on linked lists gives Make-Heap: O(1), Insert: O(log n), Minimum: O(log n), Extract-Min: O(log n), Union: O(log n).
Problem 10-3 Searching a sorted compact list
a. The random skips performed by the for loop of Compact-List-Search' only ever move the search to elements whose keys are still at most k, i.e., to positions the while loop of Compact-List-Search would itself have to pass, so Compact-List-Search' returns the same answer. Since the for loop alone executes t iterations, the for and while loops of Compact-List-Search' together perform at least as many iterations as the t iterations of Compact-List-Search.
b. The expected running time of Compact-List-Search'(L, n, k, t) is O(t + E[X_t]).
c.
E[X_t] = Σ_{i=0}^{n} i · Pr{X_t = i} = Σ_{r=1}^{n} Pr{X_t ≥ r} ≤ Σ_{r=1}^{n} (1 − r/n)^t.
d. Σ_{r=0}^{n−1} r^t ≤ ∫_0^n r^t dr = n^{t+1}/(t + 1).
e. E[X_t] ≤ Σ_{r=1}^{n} (1 − r/n)^t = (1/n^t) Σ_{r=0}^{n−1} r^t ≤ n/(t + 1).
f. Compact-List-Search'(L, n, k, t) runs in O(t + E[X_t]) = O(t + n/(t + 1)) = O(t + n/t) expected time.
g. Choosing t = √n, Compact-List-Search runs in O(max(t, n/t)) = O(√n) expected time.
h. The analysis relies on the keys being distinct. With many equal keys, jumping to a random element with an equal key value does not necessarily bring the search close (in list position) to the desired element, so the O(√n) expected-time bound need not hold.
Chapter 11

Problem 11-1 Longest-probe bound for hashing
a. With n ≤ m/2, the load factor is α = n/m ≤ 1/2. Under uniform hashing, each probe of an insertion hits an occupied slot with probability at most α, so for the random variable X counting the number of probes,
Pr{X > k} = Pr{X ≥ k + 1} ≤ α^k ≤ 2^{−k}.
b. Taking k = 2 log n in part (a), the probability that the i-th insertion requires more than 2 log n probes is at most O(2^{−2 log n}) = O(1/n^2).
c. Pr{X > 2 log n} ≤ Σ_{i=1}^{n} Pr{X_i > 2 log n} ≤ n · (1/n^2) = 1/n.
d.
E[X] = Σ_{k=1}^{n} k · Pr{X = k}
     = Σ_{k=1}^{⌈2 log n⌉} k · Pr{X = k} + Σ_{k=⌈2 log n⌉+1}^{n} k · Pr{X = k}
     ≤ ⌈2 log n⌉ · Σ_{k=1}^{⌈2 log n⌉} Pr{X = k} + n · Σ_{k=⌈2 log n⌉+1}^{n} Pr{X = k}
     ≤ ⌈2 log n⌉ · 1 + n · (1/n) = ⌈2 log n⌉ + 1 = O(log n).
Problem 11-2 Slot-size bound for chaining
a. Q_k = C(n, k) (1/n)^k (1 − 1/n)^{n−k}.
b. Let the random variable X_i denote the number of keys that hash to slot i; by part (a), Pr{X_i = k} = Q_k. Then
P_k = Pr{M = k} = Pr{ max_{1≤i≤n} X_i = k } ≤ Σ_{i=1}^{n} Pr{X_i = k} = n Q_k.
c.
Q_k = (1/n)^k (1 − 1/n)^{n−k} · n!/(k!(n − k)!) < n!/(k!(n − k)! n^k) < 1/k! < e^k/k^k,
using k! > (k/e)^k.
d. (Omitted.)
e.
E[M] = Σ_{k=0}^{n} k · Pr{M = k}
     = Σ_{k=0}^{k0} k · Pr{M = k} + Σ_{k=k0+1}^{n} k · Pr{M = k}
     ≤ k0 · Σ_{k=0}^{k0} Pr{M = k} + n · Σ_{k=k0+1}^{n} Pr{M = k}
     ≤ k0 · Pr{M ≤ k0} + n · Pr{M > k0}.
Since Pr{M ≤ k0} ≤ 1 and
Pr{M > k0} = Σ_{k=k0+1}^{n} Pr{M = k} < n · (1/n^2) = 1/n,
we get E[M] ≤ k0 · 1 + n · (1/n) = k0 + 1 = O(log n / log log n).
Problem 11-3 Quadratic probing
a. The probe sequence is ⟨h(k), h(k) + 1, h(k) + 1 + 2, h(k) + 1 + 2 + 3, ...⟩ (mod m), so
h'(k, i) = (h(k) + (1/2) i + (1/2) i^2) mod m = (h(k) + i(i + 1)/2) mod m,
i.e., quadratic probing with c1 = c2 = 1/2.
b. We show that for any key k and any probe numbers i and j with 0 ≤ i < j < m, h'(k, i) ≠ h'(k, j). Suppose h'(k, i) = h'(k, j); then i(i+1)/2 ≡ j(j+1)/2 (mod m), i.e., (j − i)(j + i + 1) ≡ 0 (mod 2m). Exactly one of the factors j − i and j + i + 1 is even (their sum is odd), so that even factor would have to be divisible by 2m; but both factors are positive and less than 2m, a contradiction. Hence the probe sequence visits every table position, and this scheme examines every slot in the worst case.
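A small runnable check of part (b): for m a power of 2, the probe sequence h'(k, i) = (h(k) + i(i+1)/2) mod m visits every slot exactly once.

def probe_sequence(h, m):
    # Probe offsets 0, 1, 3, 6, ... (triangular numbers) modulo m.
    return [(h + i * (i + 1) // 2) % m for i in range(m)]

m = 16                      # must be a power of 2
for h in range(m):
    assert sorted(probe_sequence(h, m)) == list(range(m))
print("all", m, "slots visited for every starting hash value")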
Problem 11-4 Hashing and authentication
Chapter 12

Problem 12-1 Binary search trees with equal keys
a. O(n^2)   b. O(n log n)   c. O(n)   d. Worst case: O(n^2); expected running time: O(n log n).
Problem 12-2 Radix trees
Insert the n strings into a radix tree; since the total length of the strings is n, this takes Θ(n) time. A preorder traversal of the radix tree then outputs the strings in lexicographic order in Θ(n) time.
Problem 12-3 Average node depth in a randomly built binary search tree
a. Since P(T) = Σ_{x ∈ T} d(x, T), the average depth of a node is (1/n) Σ_{x ∈ T} d(x, T) = (1/n) P(T).
b. Every node of the left subtree T_L and of the right subtree T_R is one edge deeper when measured from the root of T, so P(T) = P(T_L) + P(T_R) + n − 1.
c. For a randomly built binary search tree, conditioning on the rank of the root,
P(n) = (1/n) Σ_{i=0}^{n−1} ( P(i) + P(n − i − 1) + n − 1 ).
d. P(n) = (2/n) Σ_{k=1}^{n−1} P(k) + Θ(n).
e. This is the same recurrence analyzed in Problem 7-3 (alternative quicksort analysis), so P(n) = O(n log n).
f. Each insertion into a binary search tree mirrors a partition step of quicksort: the root plays the role of the pivot, keys smaller than the root go to the left subtree (the left subproblem) and larger keys to the right subtree, and the comparisons made while inserting a key are exactly the comparisons quicksort makes between that key and the pivots. Hence quicksort makes the same comparisons as building the BST, only in a different order.
Problem 12-4 Number of different binary trees
a. b_0 = 1 and, for n ≥ 1, splitting on the size k of the left subtree,
b_n = Σ_{k=0}^{n−1} b_k b_{n−1−k}.
b. Let B(x) = Σ_{n=0}^{∞} b_n x^n. The recurrence of part (a) gives B(x) = x B(x)^2 + 1, hence
B(x) = (1 − √(1 − 4x)) / (2x).
c. Using the Taylor expansion
√(1 − 4x) = 1 − (2/1!) x − (4/2!) x^2 − (24/3!) x^3 − (240/4!) x^4 − ...,
we get
B(x) = (1/2x) (1 − √(1 − 4x)) = Σ_{n=1}^{∞} (2n − 2)! / (n (n−1)! (n−1)!) x^{n−1} = Σ_{n=0}^{∞} (1/(n+1)) C(2n, n) x^n,
so b_n = (1/(n+1)) C(2n, n), the n-th Catalan number.
d. Using Stirling's approximation n! = √(2πn) (n/e)^n (1 + Θ(1/n)),
b_n = (1/(n+1)) C(2n, n) = 4^n / (√π · n^{3/2}) · (1 + O(1/n)).
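A quick runnable check that the recurrence of part (a) agrees with the closed form of part (c).

from math import comb

def catalan_dp(n):
    # b_0 = 1; b_m = sum_{k=0}^{m-1} b_k * b_{m-1-k}
    b = [0] * (n + 1)
    b[0] = 1
    for m in range(1, n + 1):
        b[m] = sum(b[k] * b[m - 1 - k] for k in range(m))
    return b

b = catalan_dp(10)
assert all(b[n] == comb(2 * n, n) // (n + 1) for n in range(11))
print(b)      # 1, 1, 2, 5, 14, 42, ...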
Chapter 13

Problem 13-1 Persistent dynamic sets
a. Insertion: only the new node and the ancestors of the insertion point change. Deletion: the nodes that change are the ancestors of the deleted node and of its successor.
b. Persistent-Tree-Insert(T.root, k) returns the root of the new version T':

Persistent-Tree-Insert(t, k)
1   create a new node x
2   if t == NIL
3       x.key = k
4       x.left = x.right = NIL
5   else
6       x.key = t.key
7       x.left = t.left
8       x.right = t.right
9       if k < t.key
10          x.left = Persistent-Tree-Insert(t.left, k)
11      else x.right = Persistent-Tree-Insert(t.right, k)
12  return x

c. The running time and the additional space are O(h), since one new node is created per level of the search path.
d. If nodes carried parent attributes, every change would propagate to the children through their parent pointers, forcing a copy of every node of the tree, so insertion would require Ω(n) time and space.
e. We can still find the "parent" pointers we need in O(1) time each, without storing them, by keeping the ancestors of the current node on a stack during the descent; with this, the persistent red-black tree properties can be maintained in O(log n) worst-case time and space per insertion or deletion.
Problem 13-2 Join operation on red-black trees
a. (Omitted.)
b. Assuming T1.bh ≥ T2.bh, walk down from T1's root following right-child pointers, decrementing the tracked black-height every time a black node is passed; the first black node y reached with y.bh = T2.bh is the desired node, and it has the largest key among all such nodes. This takes O(log n) time.
c. Replacing the subtree T_y of y by a new node z whose left subtree is T_y, whose right subtree is T2, and whose key is x takes O(1) time:

RB-Join'(T_y, x, T2)
1   z.left = T_y
2   z.right = T2
3   z.parent = T_y.parent
4   z.key = x
5   if T_y.parent.left == T_y
6       T_y.parent.left = z
7   else T_y.parent.right = z

d. Color the new node (carrying x) red; this leaves the black-heights of T_y and T2 unchanged but may create a red parent/red child violation, which can be repaired exactly as in RB-Insert-Fixup in O(log n) time.
e. The case T1.bh ≤ T2.bh is symmetric: descend along left-child pointers of T2 to find the black node with black-height T1.bh and smallest key, and join there.
f. The running time of RB-Join is O(log n).
Problem 13-3 AVL trees
a. Let G_h be the minimum number of nodes in an AVL tree of height h. Then G_0 = 1, G_1 = 2, and G_h = G_{h−1} + G_{h−2} + 1 for h ≥ 2, so G_h ≥ F_h (the h-th Fibonacci number). Since F_h grows like φ^h (φ the golden ratio), an AVL tree with n nodes has height O(log n).
b. Balance(x) restores the AVL property at x by a single or a double rotation, handling the four cases left-left, left-right, right-left, and right-right.
c. The procedure AVL-Insert(x, z) consists of an ordinary BST insertion (BST-Tree-Insert(T, z)) followed by calls to Balance(x) on the ancestors of the new node, so the tree remains an AVL tree.
d. BST-Tree-Insert: O(log n); Balance along the insertion path: O(log n); so AVL-Insert runs in O(log n) total time.
Problem 13-4 Treaps
a. Rotations preserve the binary-search-tree property, and the treap on a given set of nodes is exactly the BST obtained by inserting the keys in order of increasing priority (the node with minimum priority must be the root, and the two subtrees are determined recursively); since keys and priorities are distinct, this treap exists and is unique.
b. Because the priorities are assigned at random, the treap has the same distribution as a randomly built binary search tree, so its expected height is Θ(log n).
c. Treap-Insert consists of an ordinary BST insertion by key followed by rotating the new node upward (as in Heap-Increase-Key) until its priority no longer violates the min-heap property.
d. BST insertion: O(log n) expected; rotating up: O(log n) expected, each rotation costing O(1); so Treap-Insert takes O(log n) expected time in total.
e. When x is inserted it is a leaf, so C + D = 0. Each rotation that moves x one level up increases C + D by exactly 1: a right rotation of x's parent makes the old parent the top of the left spine of x's new right subtree, increasing D by 1 and leaving C unchanged (before the rotation C + D = right_spine(A) + left_spine(B), afterwards C' + D' = right_spine(A) + 1 + left_spine(B)); a left rotation symmetrically increases C by 1. Hence the number of rotations performed during the insertion of x equals C + D.
f. (Omitted.)
g. Pr{X_ik = 1} = (k − i − 1)! / (k − i + 1)! = 1 / ((k − i + 1)(k − i)).
h. E[C] = Σ_{i=1}^{k−1} Pr{X_ik = 1} = Σ_{i=1}^{k−1} 1/((k − i + 1)(k − i)) = Σ_{j=1}^{k−1} 1/(j(j + 1)) = 1 − 1/k.
i. By symmetry, E[D] = 1 − 1/(n − k + 1).
j. E[C + D] = E[C] + E[D] < 2, so the expected number of rotations performed when inserting a node into a treap is less than 2.
Chapter 14
IV Advanced Design and Analysis Techniques
Chapter 15

15-1 Longest simple path in a directed acyclic graph
Let f(i) be the longest distance from vertex s to vertex i.
Recursive formulation: f(i) = max_{(j,i) ∈ E} { f(j) + d_{ji} }.
Initial case: f(s) = 0.
Visit the vertices of G in topological order. Running time: O(V + E).
15-2 Longest palindrome subsequence
Let f(i, j) be the length of the longest palindrome subsequence of the i-th through j-th characters.
Recursive formulation:
f(i, j) = 1                                   if i = j
        = 0                                   if i > j
        = f(i + 1, j − 1) + 2                 if a_i = a_j
        = max{ f(i + 1, j), f(i, j − 1) }     otherwise
Running time: O(n^2).
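A runnable O(n^2) implementation of the recurrence above, iterating over increasing substring lengths.

def longest_palindrome_subseq(a):
    n = len(a)
    # f[i][j] = length of the longest palindrome subsequence of a[i..j]
    f = [[0] * n for _ in range(n)]
    for i in range(n):
        f[i][i] = 1
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            if a[i] == a[j]:
                f[i][j] = (f[i + 1][j - 1] if length > 2 else 0) + 2
            else:
                f[i][j] = max(f[i + 1][j], f[i][j - 1])
    return f[0][n - 1] if n else 0

print(longest_palindrome_subseq("character"))   # 5 ("carac")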
15-3 Bitonic euclidean traveling-salesman problem
First, sort the points by x-coordinate so that x_{P_i} < x_{P_{i+1}}.
Let P_{i,j} (i < j) denote the bitonic path that goes left from P_i to P_1 and then right from P_1 to P_j, with the two monotone pieces vertex-disjoint; it includes all of the points P_1, P_2, ..., P_j. Let f(i, j), for i ≤ j ≤ n, be the length of the shortest such bitonic path P_{i,j}.
Recursive formulation:
f(i, j) = f(i, j − 1) + |P_{j−1} P_j|,                      if i < j − 1
f(i − 1, i) = min_{1 ≤ k < i−1} { f(k, i − 1) + |P_k P_i| }
f(i, i) = f(i − 1, i) + |P_{i−1} P_i|
Initial case: f(1, 2) = |P_1 P_2|. The answer is f(n, n). Running time: O(n^2).
15-4 Printing neatly
Let cost(i, j) denote the contribution of a line containing words i through j:
cost(i, j) = ∞                                      if M − j + i − Σ_{k=i}^{j} l_k < 0 (the words do not fit),
           = 0                                      if j = n and M − j + i − Σ_{k=i}^{j} l_k ≥ 0 (the last line),
           = (M − j + i − Σ_{k=i}^{j} l_k)^3        otherwise.
Let f(i) be the minimum total cost of printing the first i words.
Recursive formulation: f(i) = min_{0 ≤ j < i} { f(j) + cost(j + 1, i) }.
Initial case: f(0) = 0.
Running time: O(n^2). Space: O(n).
15-5 Edit distance
a. Let X_i = x[1..i], Y_j = y[1..j], and let f(i, j) denote the minimum cost of transforming X_i into Y_j.
Recursive formulation: f(i, j) is the minimum over the applicable choices of
f(i − 1, j − 1) + cost(copy),               if x[i] = y[j]
f(i − 1, j − 1) + cost(replace),            if x[i] ≠ y[j]
f(i − 2, j − 2) + cost(twiddle),            if i, j ≥ 2, x[i] = y[j − 1], and x[i − 1] = y[j]
f(i − 1, j) + cost(delete)
f(i, j − 1) + cost(insert)
min_{0 ≤ k < m} { f(k, n) } + cost(kill),   if i = m and j = n
Initial case: f(0, 0) = 0.
Running time and space: O(mn).
b. Set cost(copy) = −1, cost(replace) = 1, cost(delete) = 2, and cost(insert) = 2; maximizing the alignment score is then equivalent to minimizing the edit-distance cost (maximize → minimize by negating).
15-6 Planning a company party
Let f(x, 0) denote the maximum total conviviality of the guests in the subtree rooted at employee x when x does not attend the party, and f(x, 1) the maximum when x does attend.
Recursive formulation:
f(x, 0) = Σ_{y ∈ Children(x)} max{ f(y, 0), f(y, 1) }
f(x, 1) = Σ_{y ∈ Children(x)} f(y, 0) + rating(x)
Initial case: if x is a leaf, f(x, 0) = 0 and f(x, 1) = rating(x).
Running time: O(V + E).
15-7 Viterbi algorithm
a. Let f(x, i) denote whether there exists a path in G from v0 to x whose label is the sequence ⟨σ_1, σ_2, ..., σ_i⟩.
Recursive formulation: f(x, i) = OR over all y with σ(y, x) = σ_i of f(y, i − 1).
Initial case: f(v0, 0) = TRUE (and f(x, 0) = FALSE for x ≠ v0); the answer is whether f(x, k) is TRUE for some vertex x.
Running time: O(k n^2).
b. Let f(x, i) denote the maximum probability of such a path.
Recursive formulation: f(x, i) = max over all y with σ(y, x) = σ_i of { f(y, i − 1) · p(y, x) }.
Running time: O(k n^2).
15-8 Image compression by seam carving
a. A seam has up to 3 choices in each of the m rows for how it continues, so the number of possible seams is roughly n · 3^{m−1}, i.e., exponential in m; enumerating them all would take Ω(3^m) time.
b. Let f(i, j) denote the lowest total disruption of a seam through rows 1..i of the m × n array A that ends at pixel A[i, j].
Recursive formulation: f(i, j) = min{ f(i − 1, j − 1), f(i − 1, j), f(i − 1, j + 1) } + d[i, j].
The answer is the j minimizing f(m, j). Running time: O(nm).
Problem 15-9 Breaking a string
Let L[0] = 0 and L[m + 1] = n. Let f(i, j) denote the lowest cost of making every break strictly between break points L[i] and L[j].
Recursive formulation: f(i, j) = min_{i < k < j} { f(i, k) + f(k, j) } + (L[j] − L[i]).
Initial case: f(i, i + 1) = 0.
The answer is f(0, m + 1). Running time: O(m^3).
Problem 15-10 Planning an investment strategy
a. (Explicit; omitted.)
b. Let f(k, i) denote the maximum amount of money after k years when the money is placed in investment i during year k.
Recursive formulation: f(k, i) = max{ f(k − 1, i) − f1, max_{j ≠ i} f(k − 1, j) − f2 } + d · r_{ik}.
c. Running time: O(years · investments^2) = O(n^2).
d. (Omitted.)
Problem 15-11 Inventory planning
Let D(x) = Σ_{i=1}^{x} d_i with D(0) = 0, so D(n) = D, the total demand.
Let f(i, j) denote the minimum cost over the first i months when j machines have been manufactured so far.
Recursive formulation, for D(i) ≤ j ≤ D:
f(i, j) = min_{D(i−1) ≤ k ≤ j} { f(i − 1, k) + max{0, j − k − m} · c } + h · (j − D(i)).
The answer is f(n, D). Running time: O(nD^2).
Problem 15-12 Signing free-agent baseball players
For every free-agent player x, let pos(x) denote the player's position, cost(x) the amount of money it will cost to sign the player, and vorp(x) the player's VORP.
Let f(i, j) denote the maximum total VORP obtainable by signing players for the first i positions using at most j dollars.
Recursive formulation: f(i, j) = max{ f(i − 1, j), max_{k : pos(k) = i} { f(i − 1, j − cost(k)) + vorp(k) } }.
The answer is f(N, X). Running time: O(max(N, P) · X).
Chapter 17

Problem 17-1 Bit-reversed binary counter
a. (Trivial.) The straightforward bit-reversal permutation on an array of length n = 2^k runs in O(nk) time.
b. Incrementing the bit-reversed counter is the mirror image of the ordinary Increment procedure for a binary counter (it scans the bits from the most significant end), so by the amortized analysis of the binary counter, the bit-reversal permutation on an n-element array can be performed in O(n) total time.
c. Yes.
Problem 17-2 Making binary search dynamic
a. Search each nonempty sorted array in turn by binary search until the element is found. The worst-case running time is
T(n) = Θ(log 1 + log 2 + log 4 + ... + log 2^{k−1}) = Θ(0 + 1 + ... + (k − 1)) = Θ(k(k − 1)/2) = Θ(log^2 n).
b. Insert:
1. Create a new sorted array A'_0 of size 1 containing the new element.
2. If A_0 is empty, replace A_0 by A'_0; otherwise merge the two sorted arrays into a sorted array A'_1 and repeat, carrying the merge upward until some A_i is empty.
Amortized worst-case analysis (accounting method): charge k = Θ(log n) per insertion; 1 unit pays for the insertion itself and the remaining k − 1 units are stored on the inserted item to pay for the merges it will later take part in. An item can move to a higher-indexed array at most k − 1 times, so the stored credit suffices. Hence the amortized worst-case running time of insertion is O(log n).
c. Delete:
1. Find the smallest j for which the array A_j (with 2^j elements) is full, and let y be the last element of A_j.
2. Find x; suppose it lies in A_i.
3. Delete x from A_i and insert y into A_i at its correct position.
4. Split the remaining 2^j − 1 elements of A_j among A_0, A_1, ..., A_{j−1} (which have exactly the right total size), leaving A_j empty.
The worst-case running time is Θ(n).
1. list all the elements in the subtree rooted at x by inorder tree walk, i.e. a sorted list 2. rebuild the subtree so that it becomes 1/2 -balanced The running time of the algorithm is Θ( x. size) and use storage
O(x. size) auxiliary
O
O
b. Since the height of the α-balanced binary search tree is (log 1 n) = (log n) Therefore, performing a search in an n -node α -balanced binary search tree takes (log n) worst-case time α
O
c. (Trivial!) Any binary search tree has nonnegative potential and that a 1/2 balanced tree has potential 0 d.
c =
1 2α
−1
e. After insert or delete a node from an n -node α -balanced tree, we use amortized time to rebalance the tree
O(1)
Problem 17-4 The cost of restructuring red-black trees (From CLRS Solution) a. (Omit!) b. All cases except or case 1 of RB-INSERT-FIXUP and case 2 of RB-DELETE-FIXUP are terminating c. Case 1 of RB-INSERT-FIXUP decreases the number of the red nodes by 1. Hence, Φ(T ′ ) = Φ(T ) 1
−
Copyright by Yinyanghu
55
d. Line 1-16 of RB-INSERT-FIXUP causes one node insertion and a unit increase in potential. The nonterminating case of RB-INSERT-FIXUP causes there color changing and decreases the potential by 1. The terminating case of RB-INSERT-FIXUP causes one rotation each and do not affect the potential. e. By the assumption that 1 unit of potenital can pay for the structural modifications performed by and of the three cases of RB-INSERT-FIXUP, we can conclude that the amortized number of structural modifications performed by any call of RB-INSERT is (1)
O
f. Φ(T ′ ) Φ( T ) 1 for all nonterminating cases of RB-INSERT-FIXUP. The amortized number of structural modifications performed by any call of RB-INSERT-FIXUP is (1)
≤
−
O
g. Φ(T ′ ) Φ( T ) 1 for all nonterminating cases of RB-DELETE-FIXUP. The amortized number of structural modifications performed by any call of RB-DELETE-FIXUP is (1)
≤
−
O
h. Since the amortized number of strutural modification in each operation is (1), any sequence of m RB-INSERT and RB-DELETE operations performs (m) structural modifications in the worst case
O
O
Problem 17-5 Competitive analysis of self-organizing lists with move-to-front a. (Trivial!) The worst-case costs for H on an access sequence σ is C H (σ) = Ω(mn) b. (Trivial!) By the defination of the rankL (x) and c i , we have
·
ci = 2 rankL (x)
−1
c. (Trivial!) By the defination of the L∗i , t ∗i and c ∗i , we have
c∗i = rankL
∗
i−1
(x) + t∗i
d. Since an adjacent transposition change only one inversion, and Φ(Li ) = 2q i , therefore, a transposition either increases the potential by 2 or decreases the potential by 2 e. (Trivial!) rankL
i−1
Copyright by Yinyanghu
| | | |
(x) = A + B + 1 and rankL
∗
i−1
| | | |
= A + C + 1 56
f. For analysing the potential function of list L i , i.e. Φ(Li ), we first assume that L∗i = L∗i−1 , therefore, Φ(Li ) Φ( Li−1 ) = 2( A B ). Then, the ∗ ∗ transposition from list Li−1 to Li , it at most increases potential by 2 t∗i . Thus, we have
−
Φ(Li )
| | −| |
·
− Φ(L − ) ≤ 2(|A| − |B| + t∗) i 1
i
g.
− Φ(L − ) =2(|A| + |B | + 1) − 1 + Φ(L ) − Φ(L − ) ≤2(|A| + |B| + 1) − 1 + 2(|A| − |B| + t∗) =4 |A| + 1 + 2t∗ ≤4(|A| + |C | + 1 + t∗)
cˆi = ci + Φ( Li )
i 1
i
i 1 i
i
i
=4c∗ i
h. (Trivial!) The cost C M T F (σ) of access sequence σ with move-to-front is at most 4 times the cost C H (σ) of σ with any other heuristic H, assuming that both heuristics start with the same list
V Advanced Data Structures
VI Graph Algorithms
Chapter 22

Problem 22-1 Classifying edges by breadth-first search
a. (Undirected graph.)
1. BFS on an undirected graph produces no back edges and no forward edges: any edge to an already discovered vertex that is an ancestor or descendant in the BFS tree would have been explored as a tree edge.
2. For each tree edge (u, v), we have v.π = u and v.d = u.d + 1.
3. For each cross edge (u, v): by Lemma 22.3, v.d ≤ u.d + 1, and by Corollary 22.4, v.d ≥ u.d; hence v.d = u.d or v.d = u.d + 1.
b. (Directed graph.)
1. There are no forward edges: a forward edge (u, v) would make v a proper descendant of u discovered through a tree path of length at least 2, so v.d ≥ u.d + 2, contradicting v.d ≤ u.d + 1.
2. For each tree edge (u, v), v.d = u.d + 1.
3. For each cross edge (u, v), v.d ≤ u.d + 1.
4. For each back edge (u, v), v is an ancestor of u in the BFS tree, so 0 ≤ v.d ≤ u.d.
Problem 22-2 Articulation points, bridges, and biconnected components
a. The root of G_π is an articulation point if and only if it has at least two children in G_π: with two children, removing the root disconnects their subtrees (an undirected DFS has no cross edges); with at most one child, removing the root leaves the remaining vertices connected.
b. A nonroot vertex v of G_π is an articulation point if and only if v has a child s such that no back edge from s or from any descendant of s reaches a proper ancestor of v.
c. The values v.low can be computed for all vertices in O(V + E) = O(E) time during the DFS, since v.low = min(v.d, the d-values reached by back edges leaving v, and the low values of v's children).
d. A nonroot vertex v is an articulation point iff it has a child s with s.low ≥ v.d (and the root of G_π is an articulation point iff it has at least two children in G_π). All articulation points can therefore be found in O(V + E) = O(E) time.
e. An edge of G is a bridge if and only if it does not lie on any simple cycle of G.
f. Edge e = (u, v) (with v a child of u in G_π) is a bridge iff e is a tree edge and u.d < v.low. All bridges of G can be found in O(V + E) = O(E) time.
g. The relation e1 ∼ e2 iff e1 = e2 or e1 and e2 lie on some common simple cycle is an equivalence relation; therefore the biconnected components of G partition the nonbridge edges of G.
h. During the DFS, push each edge onto a stack when it is first explored; when an articulation point or bridge is detected (a child s of v with s.low ≥ v.d), pop the stack down to and including the edge (v, s) and label all popped edges with the same bcc number. Running time: O(V + E) = O(E).
Problem 22-4 Reachability GT L(v) GT DFS v DFS Tree root u min(v) = u (V + E )
O
Copyright by Yinyanghu
61
Chapter 23 Problem 23-1 Second-best minimum spanning tree a. Since a graph has a unique minimum spanning tree if, for every cut of the graph, there is a unique light edge crossing the cut(Exercise 23.1-6). Therefore, For a graph that all edge weights are distinct, the minimum spanning tree is unique. But the second-best minumum spanning tree need not be unique. Here is an example,
b. MST Second-best Minimum Spanning Tree T G Minimum Spanning Tree Second-best Minimum Spanning Tree T ′ T (u, v ) T T ′ (u, v ) T ′ cyclecycle T ′ T (x, y ) w(u, v ) < w(x, y ) w(u, v ) > w(x, y ) (x, y ) T cycle (x, y ) T ′′ = T (u′ , v ′ ) cycle T T ′ (u′ , v ′ ) ′ Spanning Tree T MST w(x, y ) > w(u , v ′ ) w(u, v ) > w(x, y ) > w(u′ , v ′ ) (u, v ) ′ (x, y ) (u, v ) w(T ′ ) Spanning Tree T MST T T ′ Second-best Minimum Spanning Tree
−
−
−
−{
−{
}∪{
}∪{
}
}
∈ V . The running time is
c. For each vertex u, using DFS to find max [u, v ], v (V 2 )
O
d.
O 2. Compute max [u, v ] for all u, v ∈ V ........O(V ) 3. Find an edge (u, v ) ∈ / T , that minimizes w(u, v )−w(max[u, v ])........O(E ) The Second-best minimum spanning tree is T − {max[u, v ]} ∪ {(u, v )} The running time of the algorithm is O (V ) 1. Find Minimum Spanning Tree of G ........ (E + V log V ) 2
2
Copyright by Yinyanghu
62
Problem 23-2 Minimum spanning tree in sparse graphs
a. Let T be the set of edges chosen by MST-Reduce and let A be a minimum spanning tree of the contracted graph G'. Then T ∪ {(x, y).orig : (x, y) ∈ A} is a minimum spanning tree of G: every edge chosen by MST-Reduce is a light edge crossing some cut and is therefore safe, and contracting safe edges preserves the minimum spanning tree of what remains.
b. Obviously |G'.V| ≤ |V|/2, since every vertex is merged with at least one neighbor.
c. Using a disjoint-set structure (or simple relabeling), MST-Reduce can be implemented to run in O(E) time.
d. ???   e. ???   f. ???

Problem 23-3 Bottleneck spanning tree
a. (By an exchange argument.) A minimum spanning tree T of G is also a bottleneck spanning tree: let e be a maximum-weight edge of T and let T1, T2 be the two components of T − e. If some spanning tree had all edges lighter than e, then some edge e' with w(e') < w(e) crosses the cut (T1, T2), and T − e + e' would be a spanning tree lighter than T, contradicting that T is an MST.
b. Given a value b, we can decide in linear time whether the bottleneck value is at most b: keep only the edges of weight ≤ b and check whether the resulting subgraph connects all of V.
c. Combine part (b) with median finding: take the median edge weight b, test it, and recurse on the relevant half of the edges (contracting components already connected by light edges). The running time is O(E + E/2 + E/4 + ...) = O(E). (Any better solution?)

Problem 23-4 Alternative minimum-spanning-tree algorithms
a. Correct, by the conclusion of Exercise 23.1-5. Running time: O(E(E + V)).
b. Wrong. Running time: O(E α(V)).
c. Correct, by the conclusion of Exercise 23.1-5. Running time: O(V E).
(Any better solution?)
Chapter 24
Chapter 25

Problem 25-1 Transitive closure of a dynamic graph
a. Let the edge (u, v) be inserted into G. For all i, j ∈ V: if t_iu = 1 and t_vj = 1, then set t_ij = 1. The running time is O(V^2).
b. Consider the following example: the original graph is a chain G: v1 → v2 → ... → vn, so the connectivity matrix T is upper triangular. Inserting the edge (vn, v1) turns T into the all-ones matrix, so Ω(V^2) entries must change. Therefore there exist a graph G and an edge e such that Ω(V^2) time is required to update the transitive closure after inserting e into G, no matter what algorithm is used.
c. The algorithm:

Insert(u, v)
1   for i = 1 to |V|
2       if t_iu == 1 and t_iv == 0
3           for j = 1 to |V|
4               if t_vj == 1
5                   t_ij = 1

Since the condition "t_iu = 1 and t_iv = 0" can be true at most O(V^2) times over the entire sequence of insertions (each time it holds, t_iv becomes 1 and never reverts), the total running time of the algorithm over any sequence of insertions is O(V^3).
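A runnable sketch of the update rule of part (c): T is a boolean reachability matrix with T[i][i] = 1, and insert_edge(u, v) propagates reachability only for the rows i that newly gain v.

def make_closure(n):
    # n-vertex graph with no edges: every vertex reaches only itself.
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def insert_edge(T, u, v):
    # After adding edge (u, v): any i that reaches u now reaches everything v reaches.
    n = len(T)
    for i in range(n):
        if T[i][u] and not T[i][v]:
            for j in range(n):
                if T[v][j]:
                    T[i][j] = 1

T = make_closure(4)
insert_edge(T, 0, 1)
insert_edge(T, 1, 2)
insert_edge(T, 2, 3)
print(T[0])     # [1, 1, 1, 1]: vertex 0 now reaches every vertex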
Problem 25-2 Shortest paths in ϵ-dense graphs
a. In a d-ary min-heap the asymptotic running times are: Insert Θ(log_d n), Extract-Min Θ(d log_d n), Decrease-Key Θ(log_d n). If we choose d = Θ(n^α) for some constant 0 < α ≤ 1, they become: Insert Θ(1/α) = Θ(1), Extract-Min Θ(d/α) = Θ(n^α), Decrease-Key Θ(1/α) = Θ(1).
b. Choose d = V^ϵ; then Dijkstra's algorithm with a d-ary min-heap runs in O(V · V^ϵ + E) = O(E) time on an ϵ-dense graph, since E = Ω(V^{1+ϵ}).
c. To solve the all-pairs shortest-paths problem on an ϵ-dense directed graph G = (V, E) with no negative-weight edges, run the Dijkstra's algorithm described above from each of the |V| vertices; the total running time is O(VE).
d. Use Johnson's algorithm to reweight the graph G = (V, E), then execute the algorithm of part (c). The running time is O(VE + VE) = O(VE).
Chapter 26

Problem 26-1 Escape problem
a. For every vertex v of G = (V, E), split it into two vertices v0, v1 joined by an edge (v0, v1). Build the graph G' with V' = {v0, v1 : v ∈ V} and E' = {(v0, v1) : v ∈ V} ∪ {(u1, v0) : (u, v) ∈ E}, and define the capacity function c' on E' by
c'(v0, v1) = c(v) for v ∈ V,
c'(u1, v0) = c(u, v) for (u, v) ∈ E,
c'(u, v) = 0 otherwise.
Thus the problem with vertex capacities reduces to an ordinary maximum-flow problem on a flow network.
b. Build the graph G = (V, E) with
V = {s, t} ∪ {v_{i,j} : i, j = 1, 2, ..., n},
E = {(s, v) : v is a starting point} ∪ {(v, t) : v is a boundary point} ∪ {(u, v) : u, v are adjacent grid points},
with capacity 1 on every edge and on every vertex (handled via the construction of part (a)). Clearly |V| = O(n^2) and |E| = O(n^2 + m). The m escape paths exist iff the maximum flow from s to t has value m; a good maximum-flow implementation solves this in O(n^6) time.
Problem 26-2 Minimum path cover
a. To find a minimum path cover of a directed acyclic graph G = (V, E), construct G' = (V', E') with V' = {x0, x1, ..., xn} ∪ {y0, y1, ..., yn} and E' = {(x0, xi) : i ∈ V} ∪ {(yi, y0) : i ∈ V} ∪ {(xi, yj) : (i, j) ∈ E}, where every edge has unit capacity. If f is the value of a maximum flow from x0 to y0 in G', then the minimum path cover of G uses |V| − f paths. The running time of the algorithm is O(VE).
b. The above algorithm does not work for directed graphs that contain cycles.
Problem 26-3 Algorithmic consulting
a. Since the cut (S, T) of G has finite capacity, no infinite-capacity edge can cross it; in particular no edge (A_i, J_j) crosses (S, T), for any i = 1, 2, ..., n and j = 1, 2, ..., m.
b. The maximum net revenue is Σ_i p_i − |c|, where |c| is the capacity of a minimum cut.
c.
1. Find a maximum flow of G and its residual network G_f: O((n + m)^2 (n + m + r)) with a standard implementation.
2. Find the minimum cut (S, T) by searching the residual network from s: O(n + m + r).
3. If edge (s, A_i) crosses (S, T), hire expert i; if edge (J_i, t) does not cross (S, T), accept job i: O(n + m).
The total running time is dominated by the maximum-flow computation.
O(V + E ) 1. the new flow f ′ (u, v ) = f (u, v ) − 1
The running time of the algorithm is b.
2. if there is a augmenting path from u to v , then augment an unit flow along the augmenting path otherwise, find augmenting paths from u to s and from t to v , decreasing an unit flow along the augmenting path
The running time of the algorithm is
O(V + E )
Problem 26-5 Maximum flow by scaling
a. Since C = max_{(u,v) ∈ E} c(u, v) and at most |E| edges cross any cut, a minimum cut of G has capacity at most C|E|.