Test 4 - Asymptotic Analysis and Computational Complexity

CS 512, Algorithm Design, Spring 2000


  1. True or false: 2^(n+1) + Omega(1) = O(2^n). Explain your answer.


    It is false. If it were true, there would exist a positive real constant c1 and a non-negative integer constant N1 such that, for all n >= N1,

    (1) 2^(n+1) + f(n) <= c1 2^n
    where f(n) is an arbitrary function in Omega(1). Dividing through by 2^n gives
    2 + f(n)/2^n <= c1
    The only thing we know about f(n) is that there exists a positive real constant c2 and a non-negative integer constant N2 such that, for all integers n >= N2, f(n) >= c2. Unfortunately, this is not enough to guarantee that inequality (1) always holds because, for example, if f(n) = 2^(2n), then f(n)/2^n = 2^n and
    2 + 2^n <= c1
    doesn't hold for any c1 and N1.
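    The counterexample can be checked numerically. A minimal Python sketch (the function name f is mine): f(n) = 2^(2n) is in Omega(1) because it is always at least 1, yet the ratio f(n)/2^n grows without bound, so no constant c1 works.

    ```python
    # f(n) = 2**(2*n) is in Omega(1) (always >= 1), but f(n)/2**n = 2**n
    # is unbounded, so 2 + f(n)/2**n <= c1 fails for every constant c1.

    def f(n):
        return 2 ** (2 * n)

    ratios = [f(n) // 2 ** n for n in range(1, 6)]
    print(ratios)  # the ratio doubles at every step: [2, 4, 8, 16, 32]
    ```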

    The most common error in answering this question was asserting that Omega(1) was a constant that could be ignored. Theta(1) is a constant and may be ignored; Omega(1) is a constant lower bound and needs to be dealt with.


  2. A tree is a connected, undirected graph G having one more vertex than edge: |G.V| = |G.E| + 1. Let the set Tn be the set of all n-node, connected, undirected graphs G such that |G.E| = Theta(|G.V|). Does Tn represent the set of all n-node trees? Explain your answer.


    No, it doesn't. The problem is with the constants c1 and c2, which may take on any values necessary to make the inequalities work. For example, let every graph in the set Tu be the union of the edge sets of two n-node trees on the same vertex set. A graph in Tu is connected and has anywhere from n - 1 to 2(n - 1) edges, and, for some suitably small positive real constant c1

    c1|G.V| <= |G.E| <= 2|G.V|
    for any graph G in Tu. This means |G.E| = Theta(|G.V|) but Tu does not represent a set of trees.
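    The edge-count claim can be illustrated concretely. A small Python sketch under my own construction (a path tree and a star tree on the same eight vertices; all names are mine):

    ```python
    # Union the edge sets of two spanning trees on the same n vertices.
    # The result is connected, has between n-1 and 2(n-1) edges, and is
    # not a tree whenever the edge count exceeds n-1.

    n = 8
    path = {(v, v + 1) for v in range(n - 1)}  # path tree: n-1 edges
    star = {(0, v) for v in range(1, n)}       # star tree: n-1 edges
    union = path | star                        # they share only edge (0, 1)

    assert n - 1 <= len(union) <= 2 * (n - 1)
    print(len(union))  # 13 edges on 8 vertices, so this graph is not a tree
    ```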

    Most wrong answers for this question came about because people got it backwards: they assumed Tn was a set of trees and then showed |G.E| = Theta(|G.V|). The question asks you to assume |G.E| = Theta(|G.V|) and then show that Tn is a set of trees (or not).


  3. Derive a tight, worst-case asymptotic upper bound on the time to insert a single element into a binary heap implemented as an array. Derive a tight, worst-case asymptotic upper bound on the time to insert a single element into a binomial heap. Based on your derivations, which heap would you expect to have the shorter insertion time?


    The worst-case insertion into a binary heap occurs when inserting the new smallest (or largest) element in the heap. The new element has to bubble up to the top of the heap, which always takes log n time when there are n elements already in the heap. The tightest, worst-case asymptotic estimate for inserting an element into an n-element binary heap is Theta(log n).
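    A minimal sketch of that worst case, assuming a min-heap stored in a Python list (function and variable names are mine):

    ```python
    # Insert into an array-backed min-heap, counting bubble-up swaps.
    # Inserting a new minimum forces the full climb: about log2(n) swaps.

    def heap_insert(heap, key):
        heap.append(key)
        j = len(heap) - 1
        swaps = 0
        while j > 0 and heap[(j - 1) // 2] > heap[j]:  # bubble up
            heap[(j - 1) // 2], heap[j] = heap[j], heap[(j - 1) // 2]
            j = (j - 1) // 2
            swaps += 1
        return swaps

    heap = list(range(1, 16))    # a sorted list is a valid 15-element min-heap
    print(heap_insert(heap, 0))  # new minimum climbs all 4 levels -> prints 4
    ```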

    The worst-case insertion into a binomial heap takes place when there are n = 2^i - 1 elements in the heap, in which case inserting a new element takes log n time. However, half the time - whenever n is even - inserting a new element into a binomial heap takes constant time. The tightest, worst-case asymptotic estimate for inserting an element into an n-element binomial heap is O(log n).
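    The binomial-heap cost pattern mirrors incrementing a binary counter: inserting into an n-element heap performs one tree merge per trailing 1 bit in n. A sketch of just the cost, not a full heap implementation (names are mine):

    ```python
    # Merges performed when inserting into an n-element binomial heap:
    # each trailing 1 bit in n forces a carry, i.e. a merge of two trees.

    def insert_merges(n):
        merges = 0
        while n & 1:  # trailing 1 bit -> carry/merge, like binary increment
            merges += 1
            n >>= 1
        return merges

    print(insert_merges(6))  # n even: 0 merges, constant time
    print(insert_merges(7))  # n = 2^3 - 1: 3 merges, the worst case
    ```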

    Because inserting into a binary heap has a specified, non-trivial lower bound and inserting into a binomial heap doesn't, the expectation is that inserting into a binomial heap could be faster than inserting into a binary heap. However, an accurate assessment of relative speeds requires a more detailed look at the constants used in the asymptotic analysis.

    Among the people who remembered how heap insertion worked, the biggest problem in answering this question was not deriving tight worst-case bounds; almost everyone wrote that the tight worst-case bound on binary-heap insertion was O(log n).


  4. For a given integer constant i > 2, is computing i^n a problem in NP? Explain your answer. Assume arbitrary-precision binary numbers.


    If computing i^n is a problem in NP, then it should be possible to verify the computation in polynomial time; that is, we should be able to determine if x = i^n in polynomial time.

    x = i^n can be verified as follows:

    bool ver_exp(x, i, n)
      for j <- 1 to n
        x <- x/i
      return x = 1
    
    The loop iterates n times, but how long does the division take? An equivalent question is "how many digits does x have?" If x has a polynomial number of digits, then x/i takes polynomial time and so does the verification. However, if x has a super-polynomial number of digits, then the division does too and verification is not polynomial.

    The log base b of a number gives the number of base-b digits needed to represent the number. In this case

    log2 x = log2 i^n = n log2 i
    which is polynomial in n. ver_exp() has polynomial worst-case running time, and computing i^n is in NP.
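    A runnable version of ver_exp, assuming arbitrary-precision integers as the question allows. Under integer division, an explicit divisibility test is needed (my addition, not in the pseudocode): without it, x = 9 with i = 2, n = 3 halves down to 1 even though 2^3 = 8.

    ```python
    # Verify x = i**n by repeated division, guarding against integer
    # division silently rounding when x is not an exact power of i.

    def ver_exp(x, i, n):
        for _ in range(n):
            if x % i != 0:  # x must stay divisible by i at every step
                return False
            x //= i
        return x == 1

    print(ver_exp(5 ** 20, 5, 20))      # True
    print(ver_exp(5 ** 20 + 1, 5, 20))  # False
    print(ver_exp(9, 2, 3))             # False: 9 != 2**3
    ```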

    Using the same argument about the length of x, you also could have argued that the computation

    int exp(i, n)
      x <- 1
      for j <- 1 to n
        x <- x*i
      return x
    
    has polynomial running time under the perfect choice or nondeterministic choice model of computation.

    The biggest problem with answers for this question was confusing a computation that takes exponential time and a computation that computes an exponential value. Computing an exponential value does not necessarily take exponential time.



This page last modified on 29 March 2000.