Test 7 - Induction and Divide and Conquer

CS 512, Algorithm Design, Spring 2000


  1. Given a sequence of numbers S = x_1, ..., x_n, n > 0, an up-sequence of S is a subsequence x_i, ..., x_j, 1 <= i <= j <= n, in which x_k <= x_{k+1} for i <= k < j. Use induction to design a theta(n) algorithm to find the longest up-sequence in S.


    The induction hypothesis is

    Hyp: P(n) = It is possible to find the longest up-sequence in any n-element sequence in theta(n) time.
    The base case is easy:
    Bas: P(1) - Return the one-element sequence; this is certainly correct and can be done in theta(1) - that is, constant - time.
    The inductive step requires a bit of care on two points: how to reduce the sequence from i + 1 to i elements, and how to add the (i + 1)st element back into the sequence.

    On the first point, the (i + 1)st element can be removed from an end of the sequence, or from somewhere in the middle. Removing an element from the middle of the sequence seems to complicate the problem: it either creates two subsequences that need to be solved - as in a pure divide and conquer solution - or requires that the two subsequences be joined to form a single, i-element sequence, which is a problem non-trivially different from the original. Given these potential complications, it seems safer and simpler to remove one of the sequence ends.

    But the first point is still open: which sequence end to remove? The problem doesn't exhibit any apparent directional bias, and our experience doesn't favor one choice over the other; the choice seems arbitrary and entirely up to us. Let's remove the right end of the sequence.

    On to the second point. If the induction hypothesis holds and we've found a longest up-sequence in the i-element sequence, how does adding x_{i+1} back to the right of the sequence change the answer? Either it changes the answer or it doesn't. If it doesn't change the answer, then the longest up-sequence in the i-element sequence is also the longest up-sequence in the (i + 1)-element sequence. How do we know when x_{i+1} doesn't change the longest up-sequence? When x_i > x_{i+1}; that is, when x_{i+1} ends any previous up-sequence.

    How does x_{i+1} change the answer? When it extends the longest up-sequence, creating a new longest up-sequence. And when does that happen? When x_i <= x_{i+1} and when the longest up-sequence ends at x_i. But now we have a problem: there's no guarantee that the longest up-sequence occupies the right-most part of the i-element sequence; it could occupy any part of the sequence.

    Is this really a problem? Is it possible to change the longest up-sequence by adding x_{i+1} when the longest up-sequence doesn't occupy the right-most part of the sequence? A little thought shows the answer is "yes": if x_i <= x_{i+1}, adding x_{i+1} always extends the up-sequence occupying the right-most part of the sequence. If the right-most up-sequence grows to be longer than the longest up-sequence, the right-most up-sequence becomes the new longest up-sequence. For example, in 3, 4, 5, 1, 2 the longest up-sequence 3, 4, 5 sits at the left, but appending 3 extends the right-most up-sequence 1, 2 to 1, 2, 3; one more such extension and it overtakes the current longest.

    Solving this problem requires keeping track not only of the longest up-sequence but also of the right-most up-sequence. We have to strengthen the hypothesis accordingly:

    Hyp: P'(n) = It is possible to find both the longest up-sequence and the right-most up-sequence in any n-element sequence in theta(n) time.
    The base case is the same as before, but now the single element represents both the longest and right-most up-sequence. The inductive step is
    Ind: Assume P'(i) is true for i >= 1 and consider P'(i + 1). By the inductive hypothesis, we can find the longest and right-most up-sequences for the i-element sequence in theta(i) time. There are two cases to consider:

    1. x_i > x_{i+1} - x_{i+1} becomes the new right-most up-sequence, of length 1.

    2. x_i <= x_{i+1} - the right-most up-sequence grows in length by one.

    Both cases take constant time.

    Once the right-most up-sequence has been adjusted, it is compared with the longest up-sequence, and if longer, replaces it; otherwise, the longest up-sequence remains unchanged. This also takes constant time.

    Since the inductive step adds only constant work to the theta(i) time assumed by the hypothesis, the total is theta(i + 1), and P'(i + 1) holds.
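
    A minimal sketch of the resulting left-to-right scan, written here in Python; the function name longest_up_sequence and the (start index, length) representation of an up-sequence are illustrative assumptions, not part of the original solution.

        # Sketch of the theta(n) scan: track the right-most up-sequence and the
        # longest up-sequence seen so far, updating both in constant time per element.
        def longest_up_sequence(s):
            """Return (start, length) of a longest up-sequence in the non-empty list s."""
            # Base case P'(1): the single element s[0] is both the longest
            # and the right-most up-sequence.
            best_start, best_len = 0, 1      # longest up-sequence so far
            right_start, right_len = 0, 1    # up-sequence ending at the current element

            for i in range(1, len(s)):
                if s[i - 1] <= s[i]:
                    # Case 2: x_i <= x_{i+1} - the right-most up-sequence grows by one.
                    right_len += 1
                else:
                    # Case 1: x_i > x_{i+1} - x_{i+1} starts a new right-most up-sequence.
                    right_start, right_len = i, 1
                # Replace the longest up-sequence if the right-most one is now longer.
                if right_len > best_len:
                    best_start, best_len = right_start, right_len

            return best_start, best_len

        print(longest_up_sequence([3, 4, 5, 1, 2, 2, 6, 7]))   # (3, 5): the run 1, 2, 2, 6, 7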


  2. Use divide and conquer to derive an algorithm to multiply two n-bit binary numbers. Assume n is a power of 2. Is your algorithm asymptotically more efficient than the standard multiplication algorithm in terms of one-bit multiplications?


    Two n-bit binary integers A and B can be divided into four n/2-bit binary integers:

    A = A_l 2^(n/2) + A_r
    B = B_l 2^(n/2) + B_r
    The product AB becomes
    AB = (A_l 2^(n/2) + A_r)(B_l 2^(n/2) + B_r)
       = A_l B_l 2^n + A_l B_r 2^(n/2) + A_r B_l 2^(n/2) + A_r B_r
    A single n-bit multiplication has been changed into four n/2-bit multiplications.

    The base case occurs for 1-bit integers:

    AB = 1 if A = B = 1
       = 0 otherwise
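
    A minimal Python sketch of this divide and conquer scheme, assuming the operands are non-negative integers treated as exactly n-bit numbers with n a power of 2; the function name multiply and the global one_bit_mults counter (which tallies the 1-bit base-case multiplications for comparison with the recurrence below) are illustrative assumptions.

        # Sketch of the four-multiplication divide and conquer scheme.
        one_bit_mults = 0   # counts 1-bit multiplications (base cases)

        def multiply(a, b, n):
            """Multiply two n-bit non-negative integers a and b, n a power of 2."""
            global one_bit_mults
            if n == 1:
                # Base case: a 1-bit product is 1 only when both bits are 1.
                one_bit_mults += 1
                return a & b
            half = n // 2
            # Split each n-bit number into its left (high) and right (low) halves.
            a_l, a_r = a >> half, a & ((1 << half) - 1)
            b_l, b_r = b >> half, b & ((1 << half) - 1)
            # Four n/2-bit multiplications, recombined with shifts and additions.
            return ((multiply(a_l, b_l, half) << n)
                    + (multiply(a_l, b_r, half) << half)
                    + (multiply(a_r, b_l, half) << half)
                    + multiply(a_r, b_r, half))

        print(multiply(201, 179, 8))   # 35979
        print(one_bit_mults)           # 64 = 8^2 one-bit multiplications

    On two 8-bit operands the sketch performs 64 = 8^2 one-bit multiplications, matching the recurrence solved next.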

    The recurrence relation for the number of 1-bit multiplications in an n-bit multiply is

    T(n) = 4T(n/2)
    T(1) = 1
    which unrolls into
    T(n) = 4^(log_2 n) T(1)
         = 4^(log_2 n)
         = 2^(2 log_2 n)
         = 2^(log_2 n^2)
         = n^2
    which is no better than the n^2 one-bit multiplications used by the standard multiplication algorithm.


